Thursday, May 14, 2026

Cable television started with only a few channels

@Mark Ollig


In the late 1940s, people in many small towns struggled to get good television reception because broadcast towers were too far away, or hills and mountains blocked the signals.

In 1948, John Walson Sr. of Mahanoy City, PA, found a practical way to improve local television reception.

He ran Army-surplus, heavy-duty twin-lead cable from a mountain antenna into town, creating an early cable television system.

The system brought in clear signals from channels 3, 6, and 10 out of Philadelphia and helped him sell more television sets at his appliance store.

“One of the things that got me interested in going into cable TV in a large way was the crowd that gathered in front of my store,” Walson said during a July 21, 1970, oral history interview conducted at his Service Electric Cable TV office, formerly his appliance store.

“When I first put those three channels on, the street was completely blocked with viewers, people watching the pictures in the window,” Walson Sr. said.

In 1948, Ed Parsons of Astoria, OR, built one of the first community antenna television, or CATV, systems in the United States.

At the time, Parsons owned a local radio station and set up a small receiving antenna system atop the Astoria Hotel. He later described it as a system of multiple Yagi antennas.

The Yagi-Uda antenna, developed in Japan in the 1920s by Shintaro Uda and Hidetsugu Yagi, is directional. Its metal elements help focus reception on a single source, making it effective for pulling in weak, distant television signals, such as KRSC-TV in Seattle, about 125 miles away.

“I found a usable signal up on the top of the Astoria Hotel,” Parsons recalled in his June 19, 1986, oral history interview.

Parsons used copper-conductor coaxial cable, amplifiers, and a community antenna to deliver the distant signal to nearby homes. Each household did not need its own antenna because the shared system received the signal and distributed it by cable.

The Minneapolis Times identified very high frequency, or VHF, channels 2, 4, 5, 7, and 9 for the Twin Cities market April 26, 1948.

KSTP-TV, Channel 5, launched the following day and became Minnesota’s first commercial television station.

From 1948 to 1952, the Federal Communications Commission, or FCC, froze approvals of new television stations while it sorted out channel assignments and interference problems caused by the industry’s rapid growth.

During this time, channel assignments were shuffled, and some were reserved for educational use.

In the Twin Cities, Channel 7 was dropped from the commercial lineup, Channel 2 was designated for education, and VHF channels 4, 5, 9, and 11 were left for commercial television.

From 1952 to 1983, US ultra-high frequency, or UHF, television operated on channels 14 through 83, spanning the 470 to 890 megahertz, or MHz, band.
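The 6 MHz channel spacing behind those figures can be checked with a small helper: channel 14 starts at 470 MHz, and each channel above it adds another 6 MHz. A minimal Python sketch, my illustration rather than anything from the column:

```python
# Each US UHF TV channel is 6 MHz wide; channel 14 begins at 470 MHz.
# Illustrative helper mapping a historical UHF channel number (14-83)
# to its lower and upper band edges in MHz.

def uhf_band_mhz(channel: int) -> tuple[int, int]:
    if not 14 <= channel <= 83:
        raise ValueError("historical US UHF channels ran 14 through 83")
    low = 470 + (channel - 14) * 6
    return low, low + 6

print(uhf_band_mhz(14))  # (470, 476)
print(uhf_band_mhz(83))  # (884, 890) -- the top of the 470-890 MHz band
```

The same arithmetic gives channel 23 a band of 524 to 530 MHz, matching the analog assignment mentioned later for KTMA-TV.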

In the early 1980s, companies began experimenting with subscription television services delivered over UHF broadcast channels.

Subscribers needed decoder boxes because the broadcasts were scrambled.

Spectrum began broadcasting Sept. 22, 1982, on KTMA-TV, Channel 23, in the Twin Cities, offering scrambled subscription programming.

Its Spectrum Sports package included Minnesota Twins baseball and Minnesota North Stars hockey games.

In 1983, the FCC reallocated UHF channels 70 through 83 from television to land mobile radio services, including public safety, early cellular testing, and trunked radio.

A trunked radio system is a computer-controlled two-way radio network that lets many users share a small pool of radio frequencies.
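The frequency-sharing idea can be sketched as a controller handing out channels from a small shared pool on demand. This toy model is purely illustrative; the frequencies and names are made up and it does not reflect any real trunking protocol:

```python
# Minimal sketch of trunking: a computer-controlled site assigns any
# free frequency from a shared pool to a talkgroup when a user keys up,
# and reclaims it when the conversation ends.

class TrunkingController:
    def __init__(self, frequencies):
        self.free = list(frequencies)   # shared pool of channels
        self.in_use = {}                # talkgroup -> assigned frequency

    def request(self, talkgroup):
        if not self.free:
            return None                 # every channel is busy
        freq = self.free.pop()
        self.in_use[talkgroup] = freq
        return freq

    def release(self, talkgroup):
        self.free.append(self.in_use.pop(talkgroup))

ctrl = TrunkingController(["851.0125", "851.5125"])  # hypothetical MHz values
print(ctrl.request("police"))   # gets a frequency from the pool
print(ctrl.request("fire"))     # gets the other one
print(ctrl.request("ems"))      # None: pool exhausted until a release
```

Because assignments last only as long as a conversation, a handful of frequencies can serve many more talkgroups than one-channel-per-group radio would allow.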

Twin Cities Spectrum subscriptions peaked at 27,000 in May 1983, then fell to 13,000 by 1985 as cable competition grew and the Minnesota Twins and North Stars did not renew their sports contracts.

Spectrum’s movie service ended Sept. 29, 1985, and Spectrum shut down entirely a week later after broadcasting the Twins’ final regular-season game.

After Spectrum closed, KTMA-TV Channel 23 returned to being a free over-the-air station, so viewers no longer needed a decoder box.

In 1986, KTMA-TV switched to independent programming, showing reruns, movies, and local shows.

KTMA-TV premiered “Mystery Science Theater 3000,” or MST3K, Thanksgiving Day, Nov. 24, 1988.

KTMA-TV filed for bankruptcy in 1989, was sold, and was rebranded as KLGT in 1992.

By the late 1980s and early 1990s, cable companies began replacing many long coaxial trunk and feeder cables with fiber while keeping coaxial cable for the final connection to neighborhoods and homes.

In a traditional cable system, trunk cables carried signals over longer distances through the main network, while feeder cables branched out through neighborhoods toward customer drop lines.

These hybrid fiber-coaxial systems increased capacity and reliability, helping cable systems evolve from one-way television delivery into broadband internet, voice, and data networks.

By the late 1990s, many cable systems had become two-way networks capable of handling video, internet, and phone services.

After the 2009 digital conversion, KTMA-TV’s old analog UHF Channel 23, 524 to 530 MHz, was no longer the station’s actual broadcast frequency.

WUCW, the successor to KTMA-TV, still appeared to viewers as Channel 23, but that was only the on-screen, or virtual, channel number. Its actual digital broadcast signal was carried on UHF Channel 22, 518 to 524 MHz.

By 2026, voice and video were largely Internet Protocol, or IP, applications, sent as data packets over fiber, coaxial cable, satellite, and wireless networks alongside other digital content.

As of May of this year, channels 14 through 36, spanning 470 to 608 MHz, make up the current US UHF television range.

John Walson Sr., born John Walsonavich, died March 27, 1993, at 78.

Leroy “Ed” Parsons died May 1, 1989, at 82.

In the late 1940s, cable TV provided only a few broadcast channels.

Today, viewers have access to hundreds of channels and streaming services through modern coaxial, fiber, satellite, and internet-based networks.


An AI-assisted photographic collage by Mark Ollig, created through OpenAI image generation, illustrates the evolution of cable television from early community antenna systems and rooftop arrays to analog sets, decoder boxes, and today’s modern networks.


Thursday, May 7, 2026

Technology: reliability and customer confidence

@Mark Ollig

Over the past 50-plus years, telecommunications has undergone remarkable change.

Before electronic telephone central-office switching systems were introduced, many local telephone exchanges used electromechanical relay switches that were strictly analog and loud.

The GTE-Leich (pronounced “like”) electromechanical switch I worked with provided dial tone and routed local and long-distance calls through its line finders, selector links, hundreds group connectors, and associated relay circuits.

Those components were mounted on jack-in relay bars, which were plugged into wired backplanes, connecting each unit to the main office cable wiring.

Small 30-volt Sylvania wire-lead lamps on various links served as visual indicators.

I can still see those lamps glowing and hear the relays clicking on vertical bars inside the metal-framed cabinets in the dial room.

Aside from growing up in the telephone business, my formal technical foundation began in the late 1970s with a telecommunications degree from Wadena Technical College.

When I went to work for the local telephone company, I spent much of my time in the field splicing cables, climbing poles, installing aerial cable using a process we called lashing, and running aerial and buried telephone drops to homes and businesses.

When the phone company still leased its phones, the job included installing, repairing, and maintaining telephones and related equipment, such as extension phones and ringers.

There were also payphones to maintain, along with lines to troubleshoot when customers experienced noise, a hum, or no dial tone.

The work changed from day to day, which kept things interesting.

Over the years, my role ranged from wiring and soldering to programming call translations and routes, along with hardware maintenance and software upgrades.

Before cellphones, cable TV phone service, or Voice over Internet Protocol (VoIP), people depended on their hard-wired telephone as their main lifeline.

After summer storms, we were outside repairing lines that had been knocked down by wind or damaged by falling branches and lightning strikes.

On blowing-snow, subzero winter days and nights, troubleshooting could mean climbing an icy telephone pole or digging out a snow-buried pedestal so a business, a house in town, or a country farmhouse could make and receive calls again.

By the mid-1980s, my work also involved digital switching platforms and programming call-routing translations on a keyboard and video display unit (VDU).

By 1998, while working at TDS Telecom, I was installing and maintaining T1/DS1 and Primary Rate Interface (PRI) circuits used to transport voice and data, while working with Signaling System 7 (SS7) trunk-circuit signaling for call setup and routing.

Starting the same year, the telephone company began installing Digital Subscriber Line, or DSL, internet service over existing copper telephone pairs.

Asymmetric Digital Subscriber Line, or ADSL, provided homes with faster downloads than dial-up, while High-bit-rate Digital Subscriber Line, or HDSL, provided businesses with dedicated T1-class voice and data circuits.

On-premises private branch exchange (PBX) systems have largely given way to hosted IP PBX (HPBX) and other cloud calling platforms, which provide business phone features without requiring customers to maintain their own switching hardware.

VoIP brought me into programming softswitches, short for software switches, that handled customers’ local and long-distance calling over IP-based networks.

The Metaswitch platform I worked on is a softswitch, a software-based, programmable telephone switch that carries voice calls over internet-based networks using VoIP.

Operating a softswitch also involves programming routers, managing servers, and maintaining the software that keeps the whole system working.

Before I retired, I helped decommission many legacy digital switches as softswitches and newer telephone switching equipment replaced them.

As telecommunications networks expanded beyond copper, the transport infrastructure moved from T1 digital carriers and DS3 lines running at 44.736 megabits per second to gigabit fiber-optic systems.

These fiber systems used Synchronous Optical Network, or SONET, technology to carry voice, data, and video at speeds up to 40 gigabits per second.

Later networks adopted Optical Transport Network standards to handle data-heavy traffic at 100 gigabits per second and beyond, now reaching 400 and 800 gigabits per second, and even 1.6 terabits per second.

In typical fiber-optic internet service, an Optical Line Terminal, or OLT, at the provider sends data as light pulses through glass fiber.

At the home or business, an Optical Network Terminal, or ONT, converts those light pulses for the customer’s router or Wi-Fi equipment and serves as the handoff point between the provider and customer.

Data moving across the world’s networks is now measured in exabytes each month, with one exabyte equal to 1 billion gigabytes.

These networks carry email, web pages, social media, voice calls, video streams, banking, commerce, cloud applications, business data, artificial intelligence (AI) workloads, and data-center traffic.

Despite all the technological advances, the true value of telecommunications still lies in reliability and customer confidence that the network will work when needed.


Thursday, April 30, 2026

Q7: The computer that guarded North America

@Mark Ollig

In 1955, IBM tested the XD-1, a new computer that brought large-scale, real-time computing into US air defense.

It was evaluated by IBM and MIT engineers at Lincoln Laboratory in Lexington, MA, for use in what became SAGE, the Semi-Automatic Ground Environment air-defense system.

The XD-1’s official military designation became AN/FSQ-7: AN for Army-Navy, ‘F’ for fixed installation, ‘S’ for special or combination equipment, and ‘Q’ for special-purpose equipment, with seven as the model number. It was referred to as Q7.

Q7 was built on earlier work with MIT’s Whirlwind II real-time computer and 1950 radar-to-digital data-link experiments, laying the groundwork for SAGE.

During the Cold War, the United States and Canada needed a way to spot long-range Soviet bombers coming over the Arctic.

At its peak, SAGE used hundreds of radars, 24 direction centers, and three combat centers.

Each SAGE direction center was inside a windowless, reinforced concrete building.

These blockhouses protected both the computers and the people working there from blasts and nuclear fallout.

The four-story SAGE direction center buildings were built on huge foundation slabs.

They had to support two AN/FSQ-7 computers, each weighing about 250 tons, plus massive power, cooling, and command equipment.

These computers handled radar data, showed aircraft tracks on operator screens, and helped guide interceptor jets and missiles against Soviet bombers carrying nuclear weapons into North America.

By 1958, more than 7,000 IBM employees worked on the Q7 project. This included engineers, senior managers, and technical liaisons who worked with the military on installation, operation, and maintenance.

In Minnesota, the SAGE direction center in Duluth kept watch over the northern skies.

A Minneapolis Star article Feb. 17, 1958, describes the Duluth SAGE building as a “windowless four-story concrete blockhouse.”

The building, with walls of poured concrete 18 inches thick, cost approximately $5 million.

It required the “installation of a huge powerhouse” using six diesel-powered generating units covering nearly half a city block.

The article said the units and the “intricate internal electronics to link them up to the electric brain and other equipment” cost somewhere between $15 million and $50 million.

The Duluth SAGE facility used an air-conditioned cooling system requiring 250,000 gallons of water every 24 hours.

MIT developed the magnetic-core memory for the Q7, using tiny ferromagnetic rings to store binary values (1 or 0) based on their magnetization direction.

Electrical pulses traveled through wires that passed through the rings to read and write data, which remained stored even if the power went out.

This gave the computer a fast, reliable way to store radar data obtained from across North America.
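The read-and-restore behavior of core memory can be modeled in a few lines. This is my own sketch of the general technique, not the Q7’s actual memory organization; one real detail it does capture is that core reads were destructive, so the hardware immediately rewrote each value it sensed:

```python
# Toy model of a magnetic-core plane: each ferrite ring stores one bit
# as a magnetization direction and is addressed by its X/Y wire pair.

class CorePlane:
    def __init__(self, rows, cols):
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, x, y, bit):
        # coincident X and Y current pulses flip only the addressed core
        self.cores[x][y] = bit

    def read(self, x, y):
        bit = self.cores[x][y]
        self.cores[x][y] = 0      # sensing the bit erases it...
        self.write(x, y, bit)     # ...so the value is rewritten at once
        return bit

plane = CorePlane(4, 4)
plane.write(2, 3, 1)
print(plane.read(2, 3))  # 1, and the value survives the read
```

Unlike this Python dictionary of state, real cores held their magnetization with no power applied, which is why the stored data survived an outage.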

IBM magnetic tape drives and magnetic drums served as extra storage. Punched cards were used to load the first program data.

The Q7 system had more than 500,000 lines of machine-language code and could run about 75,000 instructions every second, impressive speed for the 1950s.

Technicians did regular maintenance on plug-in modules that held components such as resistors, capacitors, and vacuum tubes.

The module design made it easier for them to remove and replace parts, and technicians could walk the computer’s narrow internal corridors to inspect, expand, and maintain its wiring.

Working day and night, technicians replaced parts and fixed problems without turning off the Q7 system, making it one of the first real-time, always-on computers.

The 32-bit AN/FSQ-7 system used almost 60,000 vacuum tubes, with about 49,000 inside the computers themselves, to handle its logic operations.

All those vacuum tubes produced so much heat that the system needed industrial-scale cooling.

At each SAGE direction center, Western Electric installed six 650‑kilowatt diesel generators, providing about 3.9 megawatts of capacity to meet the complex’s roughly 3‑megawatt power demand, about as much electricity as a small town would use.

The IBM AN/FSQ-7 computer was officially accepted at McGuire Air Force Base in New Jersey June 26, 1958, becoming fully operational by July 1.

To do its job, the AN/FSQ-7 depended on a major communications breakthrough to receive data from radar sites.

Early AT&T Bell System data sets (modems) sent information at 110 bits per second, but SAGE used a special AT&T digital network that reached about 1,300 bits per second over dedicated phone lines.

Bell System engineers turned regular telephone circuits into a high-speed data network by conditioning the lines, ensuring digital signals remained clear from end to end.

Radar sites all over North America measured each detected aircraft’s distance and direction.

That information was prepared for digital transmission over conditioned AT&T circuits to the SAGE direction centers.

The AN/FSQ-7 computers processed incoming data in near real time as their software prioritized potential threats from Soviet bombers and other unknown aircraft, updating the airspace picture.

Inside SAGE, console operators sat in windowless rooms and watched blinking dots move across large, round cathode-ray tube screens, tracking aircraft in real time.

These rooms were known for their dim blue lighting, which reduced glare and allowed operators to use optical sensors called light guns.

Operators pointed these light guns at target blips on the radar screens to select specific aircraft and coordinate a response.

AT&T built a digital defense network by linking SAGE through about 25,000 telephone lines across North America.

The network relied on Western Electric for signal repeaters, high-gain carrier amplifiers, and special 102A switching systems, which helped quickly route radar and voice traffic between sites and the AN/FSQ-7 computers.

In the early 1960s, ICBMs diminished SAGE’s strategic value, but it continued to provide real-time airspace monitoring until its decommissioning in January 1984.

Today the former Duluth SAGE building, remodeled in the 1980s with windows, houses the University of Minnesota Duluth’s Natural Resources Research Institute.

On the lower left of the AN/FSQ-7 operator console, a standard mid‑1950s rotary desk telephone is mounted upright in a fixed cradle, with the dial plate directly below the handset and its coiled cord disappearing into the console.

It is a strong visual reminder that, in the end, a human voice still made the final decisions over the computer’s logic.



Friday, April 24, 2026

From slow-loading images to appearing instantly

@Mark Ollig

Do you recall in 1995, sitting at your computer and listening to the modem’s beeps, screeches, and bursts of static as it negotiated a connection with a bulletin board system or a commercial service like America Online?

I do, all while hoping no one picked up on the extension phone.

Typical modem speeds back then were around 28.8 kbps, and downloading a 1-MB file usually took about five to seven minutes.

As an image downloaded, our patience would be tested as it slowly appeared on the screen, one scan line at a time from top to bottom.

I am not sure today’s youth would have the patience to watch that on their smartphones.

Faster digital service was also available from the local telephone company over existing copper lines through Integrated Services Digital Network, or ISDN.

Using its Basic Rate Interface, or BRI, service, customers had two 64-kilobit-per-second B channels that could be bonded for data speeds up to 128 kbps.

It used a terminal adapter, or TA, rather than a standard analog modem.

At 128 kbps, downloading a 1 MB file typically took about 60 to 65 seconds.
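Those download times follow from simple division. A back-of-the-envelope check, assuming 1 MB is 8,000,000 bits and ignoring protocol overhead and line noise (which is why real dial-up transfers often ran longer than the raw math):

```python
# Raw transfer time for a file at a given line speed,
# assuming 1 MB = 8,000,000 bits and no overhead.

def seconds_to_download(megabytes: float, kbps: float) -> float:
    bits = megabytes * 8_000_000
    return bits / (kbps * 1000)

print(f"{seconds_to_download(1, 28.8):.1f} s")   # 277.8 s, about 5 minutes
print(f"{seconds_to_download(1, 128):.1f} s")    # 62.5 s over bonded ISDN
```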

Before websites took off, many companies operated their own in-house dial-up bulletin board systems, or BBSs, allowing users to browse products and make purchases.

Local TV and radio stations often advertised their BBS phone numbers for the public to call in, and newspapers were launching their own systems too.

Schools, universities, and city governments across the country set up BBSs to share information and connect with students, parents, and residents.

In the early 1990s in Minnesota, businesses used BBS platforms where customers could check prices, browse catalogs, and place orders before the internet became common.

Minneapolis-area TV and radio stations promoted their BBS lines for weather, sports, and schedules: a local, text-based online service that predated the World Wide Web.

I started my BBS around 1992, with the desire to become an active participant in the growing local online community scene.

At the time, I subscribed to and learned a lot from a BBS-themed magazine called “Boardwatch,” started by Jack Rickard.

I also watched the PBS program “Computer Chronicles,” hosted by Stewart Cheifet.

Many BBSs ran on a popular software program called “The Major BBS” developed by Galacticomm.

I installed this BBS software on my computer, along with six dedicated local telephone lines connected to 19.2 kbps Hayes modems.

My BBS was called “WBBS OnLine!” (Winsted Bulletin Board Service).

By the mid-1990s, national estimates suggested there were around 60,000 BBSs operating in the US.

In Minnesota alone, archived regional directories show that hundreds of dial-up BBSs were active at the time.

Their use in education was fairly limited, usually restricted to classrooms or labs with only a handful of connected computers.

An Aug. 14, 1995, article in the Willmar, MN, newspaper, West Central Tribune, described senior citizens exploring “cyber space,” with local classes helping older adults, many with little or no computer experience, become comfortable with computers and dial-up BBSs.

Seniors could send electronic mail over telephone lines, look up recipes and health information, and chat online with other older adults around the world.

“Just because you’re over 60 doesn’t mean you stop looking forward to tomorrow,” social worker Alice Munro Hilliard said. “Their willingness to learn is phenomenal.”

The article also noted that about 5,000 people age 50 and older nationwide subscribed to the SeniorNet bulletin board system, showing how dial-up communities were already drawing in a wide range of users.

The World Wide Web, or the web, operates over the internet. Mosaic, one of the first widely used web browsers, launched in 1993.

During the mid-1990s, most websites consisted of text and links, with graphics and images that loaded slowly, and BBSs and mainstream commercial dial-up services did not support streaming video.

By 1995, Netscape Navigator dominated the browser market, though Internet Explorer had also entered the scene.

During the 2020-21 COVID-19 pandemic, internet use became vital for work and education, highlighting how far connectivity has advanced since 1995.

Web browsers like Chrome and Safari became standard, while Microsoft shifted from Internet Explorer to Edge.

Education and commerce changed, with live-streaming lessons and global marketplaces becoming common.

Today, internet data travels over fiber-optic cables and optical transport systems, commonly operating at 100 to 400 Gbps, while carriers such as Windstream and Zayo Group have demonstrated 1 Tbps (terabits per second) transmission speeds in real-world trials.

In 2025, about six billion people were online, roughly three-quarters of the world’s population.

Many home internet connections in the United States now exceed 200 megabits per second, about 7,000 times faster than the 28.8 kbps modem connections common in the mid-1990s.

We have come a long way from the days when a single megabyte image took minutes to load, appearing one line at a time on the screen, to today, when it appears almost instantly.

Illustration showing how Google’s Nano Banana 2 turns a user’s request into a finished 4K image through Gemini 3.1 Flash Image. Photo by Gemini Nano Banana 2 and Mark Ollig.



Thursday, April 16, 2026

Minnesota’s early school connected computer network

@Mark Ollig

During the 1965-66 school year at University High School in Minneapolis, teachers installed a Model 33 ASR teletype.

The machine looked like a sturdy typewriter and produced a steady clatter as it typed onto paper.

Using its keyboard and a long-distance telephone connection, students logged in to Dartmouth College’s time‑sharing mainframe computing network in New Hampshire.

In early 1967, 18 school districts around Minneapolis and St. Paul formed a cooperative called the Minnesota School Districts Data Processing Joint Board, known as Total Information for Educational Systems (TIES).

They shared the costs and computer resources of the Sperry-Rand UNIVAC (Universal Automatic Computer) 1110 located in the St. Paul area.

A West Central Tribune article May 12, 1967, described TIES’ purpose as “establishing and conducting a data processing center for the service of said school districts and other school districts located in the state of Minnesota.”

Using standard telephone lines and acoustic modems to link teletype terminals with distant mainframes, multiple school districts established dial-up connections that allowed them to share computing power for both administrative tasks and classroom teaching.

The teletype served as both an input device and output printer, allowing students to type commands and receive responses from the remote mainframe computer on paper.

By having school districts share the costs of the Sperry-Rand UNIVAC mainframe hardware, software, and technical support, interactive computing was made available to schools that otherwise could not afford it.

By late 1967, TIES was serving more than 130,000 students.

Central to that classroom experience was the Model 33 ASR (Automatic Send-Receive) teletype terminal, made by Teletype Corporation, an AT&T subsidiary.

These electromechanical machines transmitted seven-bit ASCII (American Standard Code for Information Interchange) over telephone lines at 110 baud to remote mainframe computers.

To put it in perspective, sending data at 110 baud, about 10 characters per second or 0.00011 Mbps, was over 900,000 times slower than a typical 100 Mbps home internet connection today.

While the Model 33 ASR sent and received data as electrical signals over the telephone line to and from the remote mainframe, the teletype contained no microprocessor or modern digital logic.

Instead, it used electromechanical circuitry.

Incoming serial data was received as start-stop pulses, typically at 110 baud, and passed through a distributor mechanism that synchronized the signal with the machine’s internal motor-driven timing.

Each seven-bit ASCII character, along with start and stop bits, was decoded by a series of selector magnets that actuated mechanical linkages, cams, and code bars.

These components positioned the print mechanism to strike the correct character on paper.

The same signals could also drive the paper tape punch, where hole patterns encoded each character for storage and later playback.
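The decoding the selector magnets and code bars performed can be sketched in software. This is my illustration of start-stop framing, not the machine’s electromechanical logic: a 0 start bit, seven ASCII data bits sent least-significant-bit first, a parity bit, and two 1 stop bits:

```python
# Decode one 11-unit start-stop frame into its ASCII character.

def decode_frame(bits):
    assert len(bits) == 11
    assert bits[0] == 0, "frame begins with a start bit"
    assert bits[9] == bits[10] == 1, "frame ends with two stop bits"
    data = bits[1:8]                           # seven data bits, LSB first
    code = sum(bit << i for i, bit in enumerate(data))
    return chr(code)

# 'A' is ASCII 65 = 1000001 binary, so sent LSB first the data bits
# are 1,0,0,0,0,0,1; with even parity the parity bit is 0.
frame = [0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1]
print(decode_frame(frame))  # A
```

At 110 baud, those 11 units per character are exactly what limited the machine to about 10 characters per second.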

A Model 33 ASR terminal cost around $1,000 at the time, equivalent to about $10,750 today.

To connect to the remote mainframe, the terminal used either a built-in call-control unit, which included a rotary telephone dial, mode selector, and line controls, or an acoustic coupler with a standard telephone handset on the line.

With the built-in call-control unit, the user placed the call directly from the teletype itself. With an acoustic coupler, the user dialed on a standard telephone, waited for the answering tones, then set the handset into the teletype’s rubber cups so the modem could send and receive data over the line.

Because telephone networks carried analog signals while computers communicated digitally, a modem converted the teletype terminal’s outgoing digital data into analog audio tones for transmission over the phone line, then demodulated incoming tones from the remote mainframe back into digital data.
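The modulation side can be illustrated with a toy frequency-shift keying (FSK) modulator. Bell 103-era acoustic couplers on the originate side signaled with two audio tones, roughly 1,070 Hz for a 0 ("space") and 1,270 Hz for a 1 ("mark"); the sample rate here is my assumption for illustration:

```python
import math

# Toy FSK modulator: turn a bit sequence into audio samples by emitting
# one of two tones for the duration of each bit. Phase restarts at zero
# for every bit, which a real modem would handle more carefully.

def fsk_modulate(bits, baud=110, sample_rate=8000,
                 f_space=1070, f_mark=1270):
    samples = []
    samples_per_bit = sample_rate // baud
    for bit in bits:
        freq = f_mark if bit else f_space
        for n in range(samples_per_bit):
            t = n / sample_rate
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

audio = fsk_modulate([1, 0, 1])
print(len(audio))  # 3 bits x (8000 // 110) samples each = 216
```

Demodulation ran the process in reverse: the receiving modem detected which tone was present during each bit interval and reconstructed the digital data.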

The teletype didn’t have a screen; instead, it printed both the student’s input and the computer’s output on long rolls of light-yellow canary paper.

The Model 33 ASR featured a built-in eight-hole punched-tape reader and punch, which functioned as an early form of offline data storage.

As students typed, it punched one‑inch‑wide paper tape with hole patterns encoding each character, allowing them to save and later reload their programs into the mainframe without retyping them during a dial‑up session.

The 1970-71 school year saw more than 26,000 students logging into the TIES computer network.

TIES formed one of the nation’s first school-based online communities, with students learning and sharing BASIC (Beginners’ All-purpose Symbolic Instruction Code) and FORTRAN (Formula Translation) programs, information, and messages across school districts.

The computer network let students across Minnesota exchange messages and programs, including math games, YAHTZE (a computer version of Yahtzee), and simulations such as “Hamurabi,” “Sumer,” “Lunar Lander,” and “Star Trek.”

In 1971, Minnesota student teachers Don Rawitsch, Bill Heinemann, and Paul Dillenberger created “The Oregon Trail.”

Heinemann and Dillenberger programmed it in HP (Hewlett-Packard) Time-Shared BASIC on a Minneapolis school district minicomputer, while Rawitsch handled research and design.

In 1973, the Minnesota Legislature created the Minnesota Educational Computing Consortium (MECC) to expand computer access for students across the state.

That same year, MECC spent about $7 million (roughly $50 million today) on a UNIVAC 1110 mainframe built by Sperry Rand’s Univac division in St. Paul, installing it at 1925 Sather Street in Lauderdale, Minnesota.

This powerful mainframe computer became the heart of MECC’s groundbreaking statewide time‑sharing mainframe computing network, giving hundreds of schools remote access to educational software.

To manage long-distance toll costs, MECC used AT&T Wide Area Telephone Service, or WATS, and Foreign Exchange, or FX, lines through the telephone company.

WATS provided national inbound 800-number access to the mainframes through the early toll-free service AT&T established in May 1967.

FX lines were dedicated circuits used for in-state connections, letting a school connect its teletype to the dial tone of a remote telephone exchange where the mainframe’s number was local.

That allowed the school to call the mainframe as if it were in the same exchange, avoiding regular long-distance toll charges.

Like WATS, FX service was billed at a flat monthly rate, which was lower than standard long-distance toll-call charges.

A personal note: while working at the Winsted Telephone Company, we installed and maintained many national inbound and outbound WATS lines and state FX lines for local businesses, most of them tied to the Twin Cities’ large toll-free metro calling area.

During the 1970s, Winsted only had toll-free calling to Lester Prairie.

The Blooming Prairie Times reported April 23, 1975, that math and biology students used a teletype machine connected by modem to Austin High School, which relayed data to the UNIVAC 1110.

By 1977, the UNIVAC 1110 at 1925 Sather Street was retired due to high user demand and system latency that prevented it from meeting MECC’s performance contract.

It was replaced by the Control Data Corporation Cyber 73, which supported up to 448 simultaneous connections, more than 5,000 daily sessions, and approximately 2,000 teletype terminals across the statewide network.

In 1978, schools began shifting from remotely accessed mainframes to compact stand-alone microcomputers, allowing districts to own their machines and eliminate dial-up costs and latency.

MECC sought bids for a standard classroom computer and received proposals from Radio Shack, Commodore, and Apple.

International Business Machines (IBM) did not bid, as its personal computer would not be released until 1981.

MECC selected the Apple II and purchased 500 units for $649,000 ($3.4 million today).

The TIES cooperative, which first gave Minnesota students access to a central time‑sharing mainframe computing network, dissolved in 2018.

Today, the clatter of teletypes lingers only in memory, yet the digital pathways forged by those early telephone lines and circuitry laid the foundation for Minnesota’s first school‑connected computer network.



Thursday, April 9, 2026

Apollo 8 and Artemis II: no lifeboat

@Mark Ollig


After learning that Artemis II would proceed without an attached lunar lander, I was reminded of Apollo 8.

In 1968, I wondered what would happen if something went wrong and there was no lunar module (LM) aboard as a lifeboat.

Engineering delays with the LM led NASA to abandon Apollo 8’s planned Earth-orbit testing.

NASA sent three Apollo 8 astronauts into lunar orbit in the command and service module (CSM), without an attached LM.

Apollo Spacecraft Program manager George M. Low called the mission “audacious and not without some risk.”

With time running out to meet Kennedy’s goal and the Soviet Union advancing, NASA moved ahead.

NASA chose a free-return trajectory: if the service module engine failed on the way to the moon, the spacecraft would loop around the moon and use its gravity to swing safely back to Earth.

Apollo 8 launched from Pad 39A Dec. 21, 1968, at Kennedy Space Center.

Apollo 8 astronauts Frank Borman, James Lovell Jr., and William Anders became the first people to leave Earth orbit, travel to the moon, and circle it, completing 10 lunar orbits.

The spacecraft came within 77.6 miles of the lunar surface.

When an oxygen tank exploded in the Apollo 13 service module April 13, 1970, crippling the spacecraft’s power and oxygen supply, the astronauts moved into Aquarius, the lunar module, for life support and engine burns.

They later reentered the command module and used its remaining power to return to Earth.

Had that failure occurred on Apollo 8, there would have been no lifeboat.

From Apollo 10 through 17, the LM traveled attached to the CSM as backup.

After two astronauts lifted off in the ascent stage and rejoined the third astronaut in the CSM, the ascent stage was released into lunar orbit, where it later crashed onto the moon.

Apollo 10 was the exception; its ascent stage was sent into orbit around the sun.

NASA’s Artemis II mission launched April 1 from Launch Complex 39B at Kennedy Space Center, using the Space Launch System.

The SLS stands about 322 feet tall and produces more than 8.8 million pounds of thrust, compared with the Apollo-era Saturn V at 363 feet and 7.5 million pounds.

This is the first crewed Artemis flight and the first crewed mission around the moon since Apollo 17.

Commander Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen flew aboard Orion, named Integrity, without a lunar lander.

Rather than entering lunar orbit, Integrity circled the far side of the moon on a free-return path similar to Apollo 8.

The Apollo service module was almost 25 feet tall and housed fuel cells, cryogenic oxygen and hydrogen supplies, propellant tanks, and the Service Propulsion System engine.

Artemis II’s European Service Module (ESM) was built by Airbus Defense and Space under contract to the European Space Agency.

It is about 13 feet tall and houses Orion’s main engine, propellant tanks, water, oxygen, nitrogen, thermal-control hardware, and four solar array wings for onboard power.

Lockheed Martin built the Orion crew module, the capsule where astronauts live and work during the mission.

Together, the crew module and ESM made up the Orion spacecraft for most of the mission.

Apollo 8’s command module was 12 feet 10 inches wide and offered about 210 cubic feet for three astronauts.

Orion’s crew module is 16 feet 6 inches in diameter and has about 316 cubic feet of space for four.

It features a dedicated toilet, the Universal Waste Management System, which uses airflow technology similar to that on the International Space Station; Apollo crews, by contrast, used waste-collection bags and a handheld device.

Apollo 8 astronauts used the DSKY (display and keyboard) with the Apollo Guidance Computer, typing verb and noun codes on a panel with mechanical keys and green electroluminescent numeric readouts.

Verb told the computer what operation to perform; noun told it which data or system to act upon, with the active codes and results shown in three stacked rows of digits.
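
The verb-noun idea can be illustrated with a small dispatch table. The code numbers below follow real Apollo conventions (Verb 16 meant "monitor," and Noun 36 referred to the mission clock), but the program itself is only a sketch of the concept, not actual Apollo Guidance Computer software.

```python
# Illustrative sketch of the DSKY verb-noun scheme: the verb names an
# operation, the noun names the data it acts on. Not actual AGC code.

VERBS = {16: "monitor decimal"}                 # Verb 16: display and update
NOUNS = {36: ("hours", "minutes", "seconds")}   # Noun 36: mission clock time

def dsky_request(verb: int, noun: int, registers: tuple) -> str:
    """Format a verb-noun request the way the three DSKY rows might show it."""
    if verb not in VERBS or noun not in NOUNS:
        return "OPR ERR"           # the real DSKY lit an error light on bad input
    labels = NOUNS[noun]
    rows = [f"{label}: {value}" for label, value in zip(labels, registers)]
    return " | ".join(rows)

# Verb 16, Noun 36: monitor the mission clock, here at 075:04:32.
print(dsky_request(16, 36, (75, 4, 32)))
```

Typing an undefined verb or noun pair returns the operator-error response, mirroring how the DSKY rejected invalid entries.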

Orion’s cockpit centers on three large flat-panel glass displays, each reconfigurable for flight data, navigation, systems status, or procedures.

Surrounding the screens are switch panels with guarded and unguarded switches and rotary selectors, plus a cursor control device so astronauts can select on-screen items and still execute critical commands by hard switch if a display or controller fails.

Honeywell Aerospace provides two Vehicle Management Computers, each containing two flight modules, for a total of four.

These modules are connected via a triple-redundant Time-Triggered Gigabit Ethernet network that links sensors, propulsion, and life-support systems while isolating faults if a unit fails.
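
Fault isolation among redundant flight computers is commonly done by majority voting: units that disagree with the consensus are flagged as faulty. The sketch below shows only that general technique; it is not Honeywell's or NASA's actual voting logic.

```python
from collections import Counter

def majority_vote(outputs: list) -> tuple:
    """Return (agreed_value, dissenting_indices) among redundant computers.

    If a majority of units agree, their value wins and dissenters are
    flagged as possibly faulty. This is the general idea behind isolating
    a failed flight module, not Orion's actual implementation.
    """
    counts = Counter(outputs)
    value, votes = counts.most_common(1)[0]
    if votes <= len(outputs) // 2:
        raise RuntimeError("no majority: cannot isolate the fault")
    dissenters = [i for i, v in enumerate(outputs) if v != value]
    return value, dissenters

# Four flight modules report a thruster command; module 2 disagrees.
print(majority_vote([1250, 1250, 1999, 1250]))  # (1250, [2])
```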

Honeywell’s navigation technology includes inertial measurement hardware made in Minnesota.

While Apollo’s CSM drew power from three fuel cells and carried backup batteries, Artemis’s Orion module uses lithium-ion batteries and solar array wings.

Orion carries four 120-volt lithium-ion batteries, which are recharged by its four solar array wings.

The Crew Survival System suits can keep astronauts alive for up to 144 hours in case the cabin loses pressure or becomes contaminated.

Communications advanced from Apollo’s voice transmissions and basic data over the Deep Space Network to Orion’s voice, video, navigation, and science data.

The Artemis II Orion spacecraft also carried the Orion Optical Communications System, which transmitted 4K video at up to 260 megabits per second using infrared light.
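
At 260 megabits per second, transfer times are easy to estimate. The sketch below assumes the decimal definition (1 Mbps = 1,000,000 bits per second) and an idealized link with no protocol overhead, so real transfers would take somewhat longer.

```python
def transfer_seconds(megabytes: float, link_mbps: float = 260.0) -> float:
    """Idealized time to move a file over the optical link (no overhead)."""
    bits = megabytes * 8_000_000          # decimal megabytes to bits
    return bits / (link_mbps * 1_000_000)

# A one-gigabyte (1,000 MB) video clip over the 260 Mbps optical link:
print(f"{transfer_seconds(1000):.1f} seconds")
```

Under those assumptions, a gigabyte of 4K video crosses the link in about half a minute.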

After launch, Orion entered a highly elliptical orbit around Earth that extended to about 46,000 miles above the planet.

The trans-lunar injection (TLI) command sent Integrity on a four-day trip to the moon April 2, the first TLI since Apollo 17.

Orion began close lunar observations April 6 at its nearest point to the moon during its flyby.

The spacecraft descended for landing in the Pacific Ocean using two drogue parachutes, three pilot parachutes, and three main parachutes, although it could land safely with two mains.

After splashdown, five airbags righted Orion.

Artemis III, set for 2027, will test docking operations with commercial lunar landers in low Earth orbit, similar to Apollo 9 in 1969.

Artemis IV, scheduled for 2028, will send Orion and its crew to lunar orbit, where they will dock with a lander sent ahead on an uncrewed mission.

Two astronauts will then enter the lunar lander and descend to the surface, making this the first crewed moon landing since Apollo 17 in 1972.

Unlike the Apollo moon-landing crews, the Artemis IV crew will travel the 240,000 miles to the moon without a backup vehicle attached.

In a March 9 audit, the NASA Office of Inspector General warned that NASA lacks the means to rescue stranded astronauts in space or on the lunar surface.

NASA previously developed backup rescue plans for Skylab and the Space Shuttle, including a modified Apollo ready to save a stranded Skylab crew in 1973.

Following the Columbia disaster Feb. 1, 2003, the Columbia Accident Investigation Board concluded Shuttle Atlantis could have been launched on an emergency mission if the tile damage had been identified in time.

Since retiring the shuttle in 2011, NASA has not maintained a dedicated rapid in-space rescue capability for disabled crewed spacecraft, relying instead on each vehicle’s ability to return to Earth safely, including Orion.

Minnesota companies supply important technology for Artemis II.

Stratasys in Eden Prairie made more than 100 3D-printed parts for the Orion spacecraft using specialized thermoplastic.

Honeywell Aerospace Technologies in Plymouth supplied guidance and navigation systems, command-and-data-handling hardware, display and control units, and core flight software.

PaR Systems (Precision Automated Robotics) in Shoreview supplied friction-stir welding technology used to manufacture major Space Launch System and Orion structures for Artemis II.

The Apollo 8 command module is displayed at the Museum of Science and Industry in Chicago.



Thursday, April 2, 2026

Automation and AI maintain circuit board production

@Mark Ollig

Your vehicles, smartphones, computers, and other electronic devices depend on circuit boards filled with hundreds or even thousands of tiny electronic parts.

These parts include resistors, capacitors, diodes, transistors, integrated circuits, connectors, inductors, relays, and switches.

They also include sensors, voltage regulators, crystals, oscillators, logic gates, and memory chips.

There are more, but I think you get the idea.

Picking, placing, and monitoring those parts during production demands considerable effort that most people never see.

Every completed circuit board relies on a precise system that ensures each tiny part is placed in the exact location and verified for quality.

This week’s column centers on my son Daniel, who runs a full-time production facility in Minnesota that manufactures industrial electronic modules.

The company is family-owned, and Daniel works side by side with his son, my grandson.

Recently, they installed an SMT HW-T8-72/80F automated pick-and-place machine for prototyping and assembling printed circuit boards, or PCBs.

Much of their work happens behind the scenes, but the products they build help keep industrial equipment, vehicles, and control systems running across the country.

The SMT HW-T8-72/80F pick-and-place machine is manufactured in China by Beijing Huawei Silkroad Electronic Technology Co. Ltd., which exports its surface-mount equipment worldwide, including to customers in the United States.

The machine measures 4 feet, 6 inches by 4 feet, 8 inches by 4 feet, 7 inches and weighs about 1,100 pounds.

It operates on 220-volt alternating current and requires compressed air to power its pneumatic vacuum and motion systems.

I watched the SMT HW-T8-72/80F in operation in a manufacturer’s demonstration video.

About the size of two vending machines, this pick-and-place system features an operator screen, rows of component tape feeders, and a fast-moving placement head that works over a circuit board.

At the center of the machine is an eight-head placement system mounted on a horizontal gantry.

The circuit board moves into the work area on a conveyor.

Once inside, cameras locate reference marks on the circuit board so the machine knows its exact position.

The placement program then directs the moving heads to put each part at the correct X-Y location.

Several pickup heads work in rapid sequence as the machine gathers tiny electronic components from reels of carrier tape and places them onto the circuit board.

Feeders advance the carrier tape in small steps so each part is presented in the correct pickup position.

The machine uses vacuum nozzles to lift the parts, while its camera system checks their position and orientation before placing them on the board.

The entire process is controlled through an operator panel with a computer monitor, where jobs are loaded, feeder positions are assigned, and machine operation is monitored.

Safety panels enclose the work area while still allowing the operator to watch component placement on the circuit board through a viewing window during production.

The SMT HW-T8-72/80F can hold up to 80 feeders and is designed to handle a wide range of small electronic parts used on today’s circuit boards.

It works with standard digital files from design software, such as bills of materials and centroid files, which provide the X-Y coordinates and rotation data needed to place parts on a circuit board.

It also uses PCB layout data derived from Gerber files.

The files are named for the Gerber Scientific Instrument Co., which developed the format to describe a circuit board’s physical layout, including copper traces, connection pads, solder mask, and printed markings used in manufacturing.

These files help tell the machine where components go and how the job should be set up.
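
A centroid file is typically just a text table of reference designator, X-Y position, rotation, and board side. The parser below assumes a simple comma-separated layout with hypothetical column names, since the exact format varies by CAD package.

```python
import csv
import io

# Hypothetical centroid data; real column names vary by design tool.
CENTROID = """designator,x_in,y_in,rotation_deg,side
C1,0.450,1.200,90,top
R7,1.025,0.310,0,top
U3,2.100,1.750,270,top
"""

def parse_centroid(text: str) -> list:
    """Read the placement records the machine needs: which part,
    where on the board, and how it must be rotated."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {
            "ref": row["designator"],
            "x": float(row["x_in"]),
            "y": float(row["y_in"]),
            "rot": float(row["rotation_deg"]),
            "side": row["side"],
        }
        for row in reader
    ]

placements = parse_centroid(CENTROID)
print(placements[0])  # {'ref': 'C1', 'x': 0.45, 'y': 1.2, 'rot': 90.0, 'side': 'top'}
```

Each record gives the placement program one X-Y target and rotation, which is exactly the data the moving heads consume.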

The process begins after the solder paste has been applied to the circuit board.

The machine places the component parts onto the pasted surface before the assembly moves to reflow, where heat melts the solder and forms permanent electrical connections.

After that, it is inspected, often by automated optical inspection, or AOI, and any needed touch-ups or rework are completed before final testing.

The main advantage of the SMT HW-T8-72/80F pick-and-place machine is its ability to place parts consistently, with speed, precision, and reliability.

In practice, success relies not only on the machine but also on careful setup.

Feeder positions must be planned, reels kept readily accessible, nozzle types matched to the corresponding parts, and circuit board alignment verified before full production begins.

These steps help prevent defects such as tombstoning, where one end of a component lifts off the board; skew, where a part is crooked; and solder bridging, where solder connects two points that should remain separate.

A slow first-article run ensures proper part orientation and placement before full production, after which production speed increases.

The SMT HW-T8-72/80F can place nearly 11,000 components per hour under ideal conditions, but actual speeds are usually lower because of part size and board complexity.

Its positioning accuracy is rated at plus or minus 0.0004 inch, and it can handle parts as small as 0201 components, roughly 0.008 by 0.004 inch, as well as larger parts used in industrial electronics.
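
Those ratings make it easy to estimate a board's ideal assembly time; the sketch below uses the machine's rated 11,000 components-per-hour speed, so real runs on complex boards would take longer. The 450-part board is a made-up example.

```python
def board_seconds(component_count: int, rate_cph: float = 11_000) -> float:
    """Ideal assembly time for one board at the rated placement speed."""
    return component_count / rate_cph * 3600

# A hypothetical board carrying 450 parts, at the ideal rate:
print(f"{board_seconds(450):.0f} seconds per board")  # 147 seconds
```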

To guide component placement, the machine uses an eight-camera vision system to locate the board’s reference points, while vacuum nozzles pick up and place each component.

With the rise of automated board assembly, traceability became crucial because speed is only valuable if the system can track the parts used and their correct placement.

Daniel developed and implemented two in-house artificial intelligence software systems that run entirely on local hardware, with no cloud connectivity.

The first is an automated optical inspection system, or AOI, that uses a high-resolution camera to capture images of assembled circuit boards and compare them with known-good references.

Its software then uses a neural network and other AI tools to identify missing parts, misaligned components, and solder defects.

Circuit boards flagged by the AI system are sent to a human operator for review and confirmation.
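
A minimal version of that idea, stripped of the neural network, is to compare each board image against a known-good reference and flag regions whose difference exceeds a threshold. The sketch below uses plain brightness lists in place of camera images and is only an illustration, not Daniel's actual software.

```python
def flag_defects(reference: list, board: list, threshold: int = 30) -> list:
    """Return indices of regions whose brightness differs too much from
    the known-good reference image; flagged regions go to human review."""
    return [
        i for i, (ref, got) in enumerate(zip(reference, board))
        if abs(ref - got) > threshold
    ]

# Region 2 is much darker than the reference: a possible missing part.
reference = [200, 198, 205, 201]
board = [199, 197, 120, 203]
print(flag_defects(reference, board))  # [2]
```

A real AOI system works on full images and learned features, but the flag-then-review flow is the same.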

The second AI system manages inventory and production.

It scans components as they arrive, capturing part numbers, quantities, lot codes, date codes, and other details from standardized 2D barcodes to reduce manual-entry errors and improve traceability.

Each reel is tracked individually, including partial reels, so the system always knows exactly what is available for production.

When a component barcode is damaged, AI-assisted fallback tools can restore the missing data, maintaining full traceability of each component from receipt through storage to production.

Additionally, these AI tools verify the availability of required parts before a job begins and identify potential shortages to prevent production delays.
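
The shortage check amounts to comparing on-hand reel quantities, partial reels included, against a job's bill of materials. The part numbers and quantities below are invented for illustration; this is not the actual inventory software.

```python
# Hypothetical reel inventory: each reel tracked individually, partials included.
reels = [
    {"part": "RES-10K-0402", "qty": 4800},
    {"part": "RES-10K-0402", "qty": 350},    # partial reel
    {"part": "CAP-100N-0402", "qty": 900},
]

def shortages(bom: dict) -> dict:
    """Return parts whose total on-hand quantity cannot cover the job,
    mapped to how many pieces are missing."""
    on_hand = {}
    for reel in reels:
        on_hand[reel["part"]] = on_hand.get(reel["part"], 0) + reel["qty"]
    return {
        part: needed - on_hand.get(part, 0)
        for part, needed in bom.items()
        if on_hand.get(part, 0) < needed
    }

# A job needing 5,000 resistors and 1,000 capacitors:
job = {"RES-10K-0402": 5000, "CAP-100N-0402": 1000}
print(shortages(job))  # {'CAP-100N-0402': 100}
```

Summing the partial reel is what keeps the resistor count above the job's requirement; only the capacitor comes up short.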

Both AI systems operate fully on-site, so there are no cloud data transfers or subscription fees.

These systems ensure that every completed circuit board and module meets stringent quality and reliability standards.

The pick-and-place machine assembles the boards, while Daniel’s production team uses in-house AI and separate computing systems to monitor inventory, maintain detailed records, and oversee production operations.

Behind much of today’s advanced technology are skilled people like my son and grandson, who help produce the customized circuit boards, electronic control modules, and other electronic assemblies industry relies on every day.


Thursday, March 26, 2026

From BBS chat to today’s instant messaging

@Mark Ollig

Before the World Wide Web became part of everyday life, many computer enthusiasts, including this humble columnist, operated a computer bulletin board system, or BBS.

A BBS uses a software program running on a computer connected to one or more dial-up modems plugged into telephone lines.

Users called the BBS’s telephone number using a communications program such as ProComm on their modem-equipped computers, then logged in over a dial-up telephone connection.

Once connected, they could read messages, exchange files and emails, or engage in real-time chat with other users on the BBS.

In 1992, I launched my own hobbyist bulletin board system, WBBS Online, using The Major BBS, a popular “gold standard” BBS software platform developed by Galacticomm.

WBBS stood for Winsted Bulletin Board System.

An electronic handshake between the caller’s modem and the BBS modem produced audible squeals, screeches, and tones as the two negotiated a data rate, sometimes reaching a cutting-edge 19.2 kbps.

In 1992, hitting 19.2 kbps was like breaking the sound barrier compared to the standard 2,400 or 9,600 bps modems.

Regular BBS users logged in every day, shared local news, traded software, discussed hobbies, and sometimes texted late into the night via real-time chat.

WBBS users in Winsted and Lester Prairie could dial in without long-distance charges because the two towns had toll-free calling between them.

On busy evenings, hearing the modem answer meant another local caller had connected and joined the conversation.

The technology has changed, but the camaraderie of participating in the virtual community has not.

Today, I regularly text chat with my kids (they are now middle-aged adults) and my siblings (we are forever young) through a Google Messages group instant-messaging (IM) thread on our smartphones.

Instant messages arrive seamlessly over wireless connections, with no dial-up modems, no phone lines, no busy signals, and no waiting.

IM is similar in spirit to the BBS message boards and real-time chat we used over dial-up telephone lines.

AOL Instant Messenger (AIM) came out in 1997 and introduced millions of people to real-time text chatting over the internet.

BlackBerry Messenger, known as BBM, launched in 2005 exclusively for BlackBerry cellphones, letting users chat in real time instead of sending standard text messages.

BBM led some people to buy BlackBerry phones just to access the service; it was eventually opened to the iOS and Android cellphone platforms in 2013.

By then, WhatsApp and iMessage had taken over the market, and BBM shut down May 31, 2019.

These instant-messaging services felt natural to early users because features such as contact lists, screen names, and visible online status were already familiar concepts.

They first experienced them in a slower form on dial-up bulletin board systems, and later saw them arrive in real time on computers and cellphones.

Computer users accessing WBBS would check in regularly, respond to posts, and join chats with others logged in at the same time.

They shared typed messages and simple images called ASCII art, which used letters, numbers and keyboard symbols to form pictures.

Photos were downloaded as image files, often in “.jpg” format, and could take several minutes to fully appear on screen.

Many of us might remember watching a photo load slowly, line by line, from top to bottom, as the dial-up connection transferred the data.
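
That wait is easy to quantify. The sketch below assumes an idealized modem link with no protocol overhead, so actual dial-up transfers were a bit slower still.

```python
def download_minutes(kilobytes: float, bps: float) -> float:
    """Idealized file-transfer time over a modem (no protocol overhead)."""
    bits = kilobytes * 1024 * 8
    return bits / bps / 60

# A 60 KB .jpg photo at three common modem speeds of the era:
for speed in (2400, 9600, 19200):
    print(f"{speed:>5} bps: {download_minutes(60, speed):.1f} minutes")
```

At 2,400 bps, that single small photo took well over three minutes; the 19.2 kbps handshake cut it to under half a minute.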

Today’s instant messaging friends and family group chats can fill quickly with updates, videos, and photos.

Apps have replaced BBS dial-up software, and fiber-optic broadband and cellular data networks have replaced the slow modem connections that once tied up a household’s only phone line.

Yet the core behavior remains: people still gather in online digital spaces to share messages and stay connected.

For those who were around back then, a BBS was likely the first taste of online communication.

That early experience paved the way for social media and messaging platforms.

I can still clearly remember the sound of a modem handshaking late at night, letting me know another computer user was connecting to WBBS Online.

It meant someone else was out there, sitting at their keyboard and joining our small virtual community.

Today’s instant messaging is simply a version of the local virtual communities we started decades ago in dial-up BBS chat rooms.


Thursday, March 19, 2026

The evolution of Google’s ‘Nano Banana’

@Mark Ollig

Nano Banana 2 became Google’s latest artificial intelligence (AI) image creation and editing tool Feb. 26 of this year, powered by the Gemini system.

Nano Banana 2 allows users to create visuals quickly using Gemini 3.1 Flash Image technology.

The system quickly became available across Google services, allowing users to generate high-quality images in seconds.

The Gemini app and Google Messages now provide access to the technology. Developers and businesses can explore the system through Google AI Studio and Vertex AI.

Since its release, Nano Banana 2 has spread quickly across Google’s ecosystem, producing high-quality images through a wide range of applications.

The name “Nano Banana” has an unusual origin story, one that stands out in the world of technology branding.

Naina Raisinghani, a Google DeepMind product manager, created the placeholder name by combining her nicknames “Nano” and “Banana.”

She submitted the model anonymously to LM Arena (Large Model Arena), a benchmarking site used to evaluate artificial intelligence systems.

When the model quickly climbed to the top of the leaderboard, the name went viral.

Google ultimately decided to keep the unusual name and adopted the banana emoji (🍌) as the feature’s signature icon.

Despite the playful branding, Nano Banana 2 represents a serious effort to make advanced digital creation tools widely available to everyone.

Google’s goal is straightforward: make AI a practical, creative partner capable of producing clear, lifelike images while maintaining visual consistency across characters and objects.

You provide the idea, and the AI handles the technical execution.

Today’s accompanying illustration provides a roadmap through the complex inner workings of the Gemini 3.1 Flash Image system, breaking down the advanced technology into four digestible stages:

Zone one: human intention – This is the starting point where you provide the “vision” for your project, using either a voice command or a typed request to describe the image you want to create.

Zone two: the core processing engine – Often called the “engine room,” this is where the Gemini 3.1 Flash technology analyzes your instructions and builds the high-definition image.

Zone three: content integrity and provenance – In this stage, the system verifies real-world details through search grounding and embeds a SynthID digital watermark to identify the image as AI-generated.

Zone four: output and visualization – The final result is a polished 4K graphic that is automatically synced to your devices and professional tools like Google Workspace.
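
Those four zones amount to a pipeline, one stage feeding the next. The sketch below is purely illustrative: the function names are made up for this column and are not part of any real Google API.

```python
# Illustrative four-stage pipeline; all function names are hypothetical,
# not a real Google interface.

def capture_intent(prompt: str) -> dict:            # zone one
    return {"request": prompt}

def generate_image(intent: dict) -> dict:           # zone two
    return {**intent, "image": f"rendering of: {intent['request']}"}

def add_provenance(result: dict) -> dict:           # zone three
    return {**result, "watermark": "SynthID", "grounded": True}

def deliver(result: dict) -> str:                   # zone four
    return f"4K output [{result['watermark']}]: {result['image']}"

# Run a request through all four stages in order.
stages = [capture_intent, generate_image, add_provenance, deliver]
output = "a lighthouse at dawn"
for stage in stages:
    output = stage(output)
print(output)
```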

To bring these visions to life, Google uses its Veo video model to animate the high-resolution images generated by Nano Banana 2, turning a static 4K design into a cinematic sequence with realistic motion and lighting.

Unlike earlier Gemini tools that required long keyword lists, Nano Banana 2 allows users to describe their request in natural language.

Instead of writing complex prompts, users simply describe a scene to Nano Banana 2 as they might to a human designer.

The system interprets the request, generates a draft image, and allows the user to refine details through a conversational process.

For example, a user designing a new kitchen might say, “Change the style to mid-century modern,” and the system updates only those elements while preserving the rest of the composition.

A major challenge with earlier AI image tools involved visual continuity.

Characters or objects would often change appearance between frames, something I experienced that led to brief episodes of frustration.

But I digress.

Nano Banana 2 addresses this issue by tracking up to five characters and 14 individual objects across a sequence of images.

This capability makes the system useful for storyboards, advertising campaigns, and illustrated narratives that require consistent character identity.

Another update addresses the ongoing issue with AI-generated text.

Nano Banana 2 treats typography as structured data rather than just visual pixels. This lets the system create clear, readable text in many languages.

Users can put phrases in quotation marks to make posters, diagrams, or signs with correct spelling.

One of the major advances in the 2026 release is multimodal operation, meaning the system can interpret both images and text within the same reasoning framework.

This allows more realistic image-to-image editing.

For example, a user can upload a photograph of a kitchen and instruct the system to “render this in a midcentury modern style while keeping the cabinet layout unchanged.”

The model adjusts the visual style while preserving the room’s physical structure.

Nano Banana is also compatible with Google Lens. Users can tap the Nano Banana button to view and interpret their environment using a smartphone camera.

The result functions as a mobile design assistant capable of visualizing renovations, décor changes, or clothing variations in augmented reality (AR).

Another important step forward involves resolution.

Nano Banana 2 now supports 4K image output (roughly 4,000 pixels of horizontal resolution), allowing concept images generated in seconds to become print-quality graphics.

Google has integrated these capabilities into Google Workspace and Google Ads.

This allows marketing teams to maintain brand consistency across large collections of AI-generated images.

The approach helps Google compete with other generative-image platforms such as Adobe Firefly and Midjourney.

To improve factual accuracy, Nano Banana 2 also incorporates search grounding.

When users request a specific landmark or brand, the system consults Google Search to ensure the rendering reflects real-world information.

The process for how an idea becomes a finished digital image begins with a user request, which may involve uploading a photograph or entering a written description.

That request is sent to the Gemini 3.1 processing system, where the model analyzes the instructions and constructs the image.

During generation, the system may query Google Search to verify the visual accuracy of real-world objects.

Before delivery, the finished image receives a SynthID watermark.

SynthID, created by Google DeepMind, is a digital watermark that embeds imperceptible signals into images, enabling systems to detect AI-generated content.

Nano Banana 2 also supports C2PA (Coalition for Content Provenance and Authenticity) credentials.

C2PA provides a verifiable digital record showing how an image was created without altering its visible appearance.

The final result is a high-resolution image ready for tablet, smartphone, or workstation displays, as well as for advertising presentations or printed media.

Nano Banana 2 streamlines connection to creative tools, letting users focus on their artistic vision. An innovative idea quickly turns into a finished design.

I have used Nano Banana 2 to create illustrations for my columns and have been pleasantly surprised by its capabilities.

Visit https://gemini.google/overview/image-generation/ to learn more and to try Nano Banana.