
Friday, April 26, 2024

Sipping coffee and reminiscing

© Mark Ollig 


Sony announced its first PlayStation game console May 11, 1995, at the Electronic Entertainment Expo in Los Angeles, CA.

By August of that year, I was in an electronics showroom looking for the latest state-of-the-art notebook computer.

People were checking out Sony’s Watchman FD-210, Sega Saturn, Discman, and the Panasonic Shockwave Portable CD Player.

Many shelves were also lined with computers.

In 1995, I made it a weekly ritual to watch “Computer Chronicles,” a television program hosted by Stewart Cheifet since 1983.

This program served as a gateway to the constantly advancing world of personal computing technology by highlighting the latest innovations.

The show aired during the peak of the personal computing revolution, which many of us were a part of.

Cheifet’s presentations inspired me to learn more about the latest computing technology.

I remember checking out the notebooks on display in the computer showroom, such as the Toshiba Satellite, Apple PowerBook, and Hewlett-Packard OmniBook.

While I was browsing, a salesman approached and asked, “Can I assist you in finding anything?”
“I’m looking to buy a portable notebook computer,” I replied.

The salesman smiled and asked for more details; I could tell he anticipated a big sale.

“I’ll need the Windows 95 operating system, Microsoft Office 95, communication software, a web browser, and a few games,” I told him.

The Microsoft Office 95 bundle was a must-have, as it included Word, Excel, PowerPoint, and the Schedule+ organizer.

After the salesman showed me some of the notebook computers, I decided on the Hewlett-Packard (HP) OmniBook 4000CT, which measured 11.6 by 8.9 by 1.93 inches and weighed 6.7 pounds.

It included the Intel 100 MHz 486DX4 processor, a full keyboard, a 3.5-inch floppy drive, 16 MB of RAM, and a 520 MB hard drive.

There was a 32 MB RAM and an 810 MB hard drive option, but I thought, “Who needs that much?”

At the time, 520 MB was substantial, offering more than enough space to store my operating system, office and utility software, the Netscape Navigator web browser, the ProComm communications program, documents, photos, and audio files.

I can imagine young people reading this column on their smartphones, smiling at my being impressed with 520 MB of disk storage while sipping their espressos.

The OmniBook had a 10.4-inch diagonal Thin-Film Transistor (TFT) active-matrix display with a vibrant 640 by 480 resolution that was quite impressive when compared to the older laptop screens I had seen. The colors popped; the text was razor-sharp.

It also contained a rechargeable nickel-metal hydride battery that could keep it powered for up to three hours.

The cost of my new notebook computer and software was slightly more than $3,000 (equivalent to $6,100 this year).

Since I am retired with more time on my hands, I decided to test NASA’s 15-pound, 8-by-8-by-7-inch Apollo Guidance Computer (AGC), specifically developed for the Apollo missions to the moon, against my OmniBook.

I ran the technical specs of both the OmniBook 4000CT and the AGC through an AI program.

The AGC’s specialized design was well-suited for real-time data processing, enabling split-second calculations and adjustments for navigation and control.

Its core rope memory, a form of non-volatile storage, made it remarkably reliable.

The AGC had built-in redundancy, fault-tolerant code, and error detection features, along with software programs tightly integrated with the Apollo spacecraft’s guidance systems.

The AGC “Colossus” fixed memory software system processed substantial amounts of mission data and calculated space flight and orbital trajectory information for the Apollo astronauts.

The “Luminary 1-D” software, also stored in the AGC’s fixed memory, calculated maneuvers, managed navigation and control, and provided crucial information to guide the Apollo lunar module during its descent and landing on the moon.

Despite the resources of my OmniBook 4000CT computer, it could not outperform the overall abilities of NASA’s AGC.

Even with its 1960s components – a 2.048 MHz clock, 2048 words of magnetic-core RAM, magnetic-core rope ROM, integrated circuits, and specialized software – the AI program concluded that the AGC would outpace my OmniBook in calculating trajectories for a moon trip.

The Apollo Guidance Computer’s primary input and output interface used a display and keyboard unit known as a DSKY (pronounced diskey).

The DSKY included a 21-digit display and a 19-button keyboard.

It used a special command language that included two-digit numbers for programs and for verb and noun codes, while five-digit numbers represented specific data such as location or speed.
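
To make the verb-and-noun idea concrete, here is a short Python sketch of how such a command pair might be interpreted; it is only an illustration, and the code tables below are simplified placeholders rather than NASA’s actual verb and noun assignments.

# A simplified, hypothetical sketch of the DSKY's verb-noun command idea.
# The tables below are illustrative placeholders, not the real Apollo code lists.
VERBS = {16: "monitor the decimal value of", 37: "change the running program to"}
NOUNS = {36: "the mission clock time", 65: "the sampled time"}

def interpret(verb: int, noun: int) -> str:
    """Translate a two-digit verb and noun pair into a readable request."""
    try:
        return f"VERB {verb:02d} NOUN {noun:02d}: {VERBS[verb]} {NOUNS[noun]}"
    except KeyError:
        return "OPR ERR"  # the DSKY lit an error indicator for unrecognized entries

print(interpret(16, 36))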

Each Apollo command module carried one AGC and two DSKYs, designed by resourceful MIT engineers during the early 1960s and built by the Raytheon Company.

The Apollo Guidance Computer played a vital role in the moon missions, but its success was supported by powerful IBM System/360 mainframe computers.

Located in the Real-Time Computer Complex (RTCC) at the mission control center in Houston, TX, these computers handled complicated calculations and “mission-critical” data.

Sipping my coffee and reminiscing about my old OmniBook made me realize that much of the journey has been more than just comparing megabytes and processors.

The Apollo Guidance Computer and the DSKY.

My Apollo display and keyboard unit, aka DSKY.

Picture in the HP manual of my OmniBook 4000CT.



Friday, April 19, 2024

The space shuttle Enterprise

© Mark Ollig  


On Nov. 28, 1968, the Orlando Sentinel newspaper led with the headline: ‘NASA Engineers Study Space Shuttle Plans.’

“The next major thrust in space may be the development of an economical launch vehicle for shuttling between Earth and installations such as space stations in orbit,” the article began.

NASA aimed to develop a reusable spacecraft that could cost-effectively carry up to eight astronauts to and from Earth orbit.

President Richard Nixon approved the space shuttle program in January 1972.

Between 1972 and 1976, the space shuttle design underwent extensive modification and testing of its heat-resistant tile system, reusable rocket boosters, and a complex computer guidance system for navigation and control.

In 1976, hundreds of thousands of letters from “Star Trek” fans (including me) were sent to then President Gerald Ford, asking him to name the prototype space shuttle “Enterprise,” in a grassroots effort to pay homage to the iconic spaceship from the television series.

It appeared President Ford shared our enthusiasm for the name.

On Sept. 8, 1976, at the White House, President Ford recommended the shuttle be named Enterprise during a meeting with NASA Administrator James C. Fletcher, saying, “It is a distinguished name in American naval history, with a long tradition of courage and endurance.”

Ford went on, “It is also a name familiar to millions of faithful followers of the science fiction television program Star Trek. To explore the frontiers of space, there is no better ship than the space shuttle, and no better name for that ship than the Enterprise.”

NASA had initially chosen the name Constitution for the prototype shuttle.

The new Enterprise shuttle orbiter was exhibited during a public ceremony Sept. 17, 1976, at its manufacturing plant in Palmdale, CA, a suburb of Los Angeles.

Star Trek creator Gene Roddenberry, along with most of the television series’ original cast members, attended the ceremony.

On Aug. 12, 1977, the space shuttle Enterprise, attached atop a modified Boeing 747, took off from California’s Edwards Air Force Base.

While the mated pair was traveling at 322 mph at an altitude of 26,400 feet, explosive bolts severed the three mounting struts attaching Enterprise to the 747, releasing the shuttle.

Upon separation, the two onboard shuttle astronauts, Fred Haise and Gordon Fullerton, piloted the Enterprise like a glider.

The flight lasted five minutes and twenty-one seconds, ending with a safe landing on the seven-mile-long dry lakebed runway at Edwards Air Force Base as 40,000 people looked on.

While not designed for spaceflight, Enterprise was crucial in validating the shuttle’s aerodynamics and landing capabilities.

It was equipped with state-of-the-art navigation and control systems for this testing.

I can still vividly recall watching the live broadcast of the Enterprise’s test flight.

On April 12, 1981, the first orbital test flight of the space shuttle Columbia, designated Space Transportation System One (STS-1), launched from Complex 39A at Kennedy Space Center in Florida.

Astronauts John W. Young and Robert L. Crippen crewed this historic first flight.

At T-minus four seconds, Columbia’s three main engines ignited, and with a final computer check, the two solid rocket boosters roared to life.

“Liftoff! Liftoff of America’s first space shuttle . . . and the shuttle has cleared the tower,” said NASA’s Hugh Harris, providing launch commentary.

The space shuttle’s solid rocket boosters and main engines combined to generate more than 6.8 million pounds of thrust.

This immense power lifted the 4.5-million-pound launch weight of the external tank, the solid rocket boosters, and the orbiter itself, which alone weighed nearly 109.7 tons.

The solid rocket boosters (SRBs) on either side of the space shuttle’s external fuel tank were the primary source of this thrust. Together, the two SRBs generated a combined 5.6 million pounds of thrust.

Additionally, Columbia’s three main liquid-fuel cryogenic RS-25 rocket engines, which burned liquid hydrogen and oxygen, produced around 1.2 million pounds of thrust.

Approximately 8.5 minutes after launch, Columbia achieved Earth orbit.

The astronauts tested onboard systems, including opening and closing the shuttle payload bay doors.

During their 37 orbits around the Earth, the crew continued checking out the orbiter; the shuttle’s Canadian-built robotic arm, the Canadarm, used to maneuver objects in space, would not fly until the following mission, STS-2.

The space shuttle’s thermal protection system consisted of approximately 25,000 high-temperature reusable surface insulation silica ceramic fiber tiles, protecting it from temperatures up to 3,000 degrees Fahrenheit during re-entry.

Honeywell Inc., headquartered in Minneapolis, was responsible for developing flight controls, computer systems, and other technologies used in the space shuttle.

On April 14, 1981, Columbia returned to Earth, landing at Edwards Air Force Base in California after completing its 54-and-a-half-hour mission.

After the Challenger disaster in 1986, NASA considered using the Enterprise as a replacement.

However, due to cost, time, and design improvements, they instead chose to build a new space shuttle named Endeavour.

The Atlantis orbiter completed the final mission of the space shuttle program, STS-135, when it was launched July 8, 2011, and landed July 21.

NASA’s Enterprise shuttle test flight can be seen at: tinyurl.com/Enterprise1977.

Today, the Enterprise is on display at the Intrepid Sea, Air, and Space Museum in New York City: tinyurl.com/1977Enterprise.
My space shuttle model, which I put together and painted Aug. 15, 1982.


My model of the space shuttle Atlantis (orbiter vehicle designation: OV-104), with a swatch of its space-flown cargo bay liner.






Friday, April 12, 2024

The personal computer stepping stone

© Mark Ollig  


In 1976, the Sol-20 revolutionized personal computing with its self-contained design, helping make computing accessible to a much wider audience.

While small microcomputers such as the build-it-yourself Altair 8800 were built with switches and blinking lights, a new fully-assembled model with a built-in keyboard and attachable display monitor marked a dynamic shift toward user-friendly home computers.

Lee Felsenstein designed the Sol-20 microcomputer with the help of Bob Marsh and Gordon French.

They were all important figures at the Processor Technology Corporation and members of the Homebrew Computer Club in Menlo Park, CA, where Felsenstein served as the acting president.

The Homebrew Computer Club was active from 1975 to 1986 and included notable members Steve Jobs and Steve Wozniak, co-founders of Apple, and Bill Gates, co-founder of Microsoft.

The Sol-20 was a personal, independently operating microcomputer, a departure from the era of remote data terminals connected to large mainframe computers like the IBM System/370.

At the 1976 Personal Computing Show in Atlantic City, NJ, the Sol-20 received positive feedback and garnered much attention.

Popular Electronics magazine’s July 1976 cover featured the Sol-20 computer and dubbed it a “highly intelligent terminal.”

Processor Technology, based in Emeryville, CA, manufactured and sold the Sol-20.

This computer featured a sleek blue metal case, optional walnut side panels, a full-sized keyboard, a power supply, and a cooling fan.

The Sol-20 could be purchased either as a kit for $995 or fully assembled with a monitor for $1,495.

Under the computer’s hood, there was an 8-bit Intel 8080 microprocessor with a clock speed of 2 MHz.

It also included an S-100 bus with five expansion slots (a popular interface used with microcomputers), along with serial, parallel, and cassette ports.

A model called the Sol-10 was available without the S-100.

The Sol-20 reportedly shipped with base configurations ranging from 8 KB to 48 KB of RAM.

While these RAM numbers changed throughout its production run, even a minimal configuration, such as 1 KB to 2 KB, was significant in the late 1970s.

The Sol-20 computer could be expanded to a whopping 64 KB via S-100 printed circuit boards.

The S-100 bus expansion allowed users to add memory, graphics, floppy disk drives, printers, and modems.

A few years earlier, Lee Felsenstein designed the PennyWhistle 103, which was one of the first modems designed for computer hobbyists.

It was a 300-baud acoustic coupler modem that could connect to other computers or community dial-up computer bulletin board services.

The PennyWhistle 103 transmitted and received data over telephone lines using a standard telephone handset and was priced at $109.95 plus $2.50 for postage and handling.

Before affordable hard drives and modern operating systems, early Sol-20 versions relied on non-volatile memory (NVM) read-only memory (ROM) plug-in modules.

NVM ROM is computer memory that permanently stores firmware or software on a chip in binary code; the user cannot modify its contents.

These modules retain data permanently, even when the power is off, unlike volatile memory.

They also contain the programming that initiates the computer’s startup (boot) process every time it is turned on.

These modules provided necessary instructions for starting the computer and controlling the keyboard, display, and cassette interface.

The Sol-20 computer also relied on cassette tapes for program input and data storage.

It utilized the Kansas City Standard (KCS) for encoding, a design intended to store digital data on inexpensive cassette tapes for early microcomputers.

People connected a regular cassette tape recorder to the computer’s cassette port to save data and load programs.

The Sol-20 computer’s five S-100 bus slots were used for expansion options like memory, graphics, audio, storage devices, and printers.

Storage formats included eight-inch floppy disks and the smaller 5.25-inch minifloppies, as they were called then.

One popular peripheral expansion option for the Sol-20 was the Helios II Disk Memory System, which featured dual eight-inch drives.

The Helios II Disk Memory System typically used single-sided, double-density (SSDD) eight-inch diskettes, each holding an average of 384 KB of data.

The Sol-20 computer’s cassette interface supported both the KCS 300-baud rate and the Computer Users Tape Standard (CUTS), with its optional 1200-baud mode (note: in this instance, the baud rate is equivalent to bits per second).
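
For readers curious about what that cassette encoding involved, here is a small Python sketch, offered as an illustration rather than a faithful reproduction, that builds the audio samples for one byte using the commonly described KCS scheme: a 1,200 Hz tone for a 0 bit, a 2,400 Hz tone for a 1 bit, and each byte framed with a start bit and two stop bits at 300 baud.

import math

SAMPLE_RATE = 44100   # audio samples per second
BAUD = 300            # the KCS rate; each bit lasts 1/300 of a second

def tone(freq_hz: float, duration_s: float) -> list[float]:
    """Generate a sine-wave tone as a list of audio samples."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def encode_byte(value: int) -> list[float]:
    """Encode one byte: start bit (0), eight data bits LSB-first, two stop bits (1)."""
    bits = [0] + [(value >> i) & 1 for i in range(8)] + [1, 1]
    samples: list[float] = []
    for bit in bits:
        # A 0 bit becomes a burst of 1,200 Hz; a 1 bit becomes a burst of 2,400 Hz.
        samples.extend(tone(1200 if bit == 0 else 2400, 1.0 / BAUD))
    return samples

audio = encode_byte(ord("A"))
print(len(audio), "audio samples for one byte")

Feeding a stream of such tones to an ordinary cassette recorder, one byte after another, is roughly what the Sol-20’s cassette interface did in hardware.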

Games for the Sol-20 included a race-driving game, the action game Target, backgammon, Trek-80 (a text-based space adventure inspired by Star Trek), and GAMEPAC 1, an arcade compilation featuring Pong, chess, and checkers.

Users would input programs into the Sol-20 by loading them from pre-coded cassette tapes, purchasing commercial modules, or manually typing in programming code usually found in computing magazines.

Although nearly 12,000 Sol-20 computers were sold from 1976 to 1979, Processor Technology ended its production in May 1979 due to increased competition in the rapidly evolving computer industry.

The Sol-20 microcomputer served as a stepping-stone for a new generation of personal computing enthusiasts, programmers, and engineers.

The Sol-20 microcomputer (PC) from 1976.



Friday, April 5, 2024

Text-to-video: OpenAI’s Sora

© Mark Ollig  


OpenAI, a US-based research organization, has developed Sora, a text-to-video technology.

The name Sora means ‘sky’ in Japanese, hinting at its “the sky is the limit” possibilities.

Sora is an advanced program that uses algorithms and extensive training data to transform written text into high-quality videos.

Its technology allows you to generate professional-grade videos with multiple moving characters and diverse visual styles simply by writing a descriptive text prompt.

Sora can even take still images and transform them into videos, extend existing videos, and fill in missing film segments.

Frames from an AI-generated video based on Mark Ollig programming his Sinclair ZX81 computer in BASIC.

OpenAI’s Sora software was released in February of this year to cybersecurity professionals known as “red teamers.”

The software will undergo susceptibility testing to address any vulnerabilities that malware or hackers could exploit.

OpenAI is still improving Sora’s performance along with coding to prevent the creation of unethical video content.

After viewing some of the Sora AI-generated videos available on their website, I came away impressed by their realism. The movements, lighting, and textures were strikingly lifelike.

I installed the Sora app (last updated March 27, 2024) and made two AI videos, one from my text description and the other from a 1982 photo of me with added movements based on brief text input.

In addition to Sora, OpenAI has developed a suite of well-known AI applications.

These include GPT-4, a powerful AI language model, and ChatGPT, an interactive AI chatbot.

OpenAI also created DALL-E, an AI system that generates realistic images and artwork from simple text descriptions.

The name playfully combines artist Salvador Dalí and Pixar’s animated robot WALL-E.

Another notable OpenAI application is Codex, which can translate natural language instructions into computer code.

As an AI-based tool, Sora generates videos from text input by drawing on the vast amounts of data it was trained on.

Sora has various potential applications across different fields.

For instance, educators can use it to enhance their lessons with interactive simulations, which can assist students in comprehending intricate concepts.

Marketers can also leverage Sora’s capabilities to create visually appealing and engaging campaigns that can grab the attention of their target audience.

Designers of various specialties, including product, UX/UI (user experience/user interface), and motion graphics, can also use Sora to quickly visualize their concepts.

Sora can also be used to create videos that complement music.

Fashion and interior designers can use Sora’s high level of accuracy to create precise prototypes, which can help them refine their designs more efficiently.

Sora’s ability to generate videos in various styles – from photorealistic to imaginative animation – opens up a world of creative possibilities.

In the 1970s, the telecommunications industry used specialized programming languages like Protel (Procedure Oriented Type Enforcing Language) to manage their complex systems.

Developed by Bell-Northern Research, Protel drew inspiration from structured programming language models like PASCAL, created by Niklaus Wirth in 1970.

PASCAL’s emphasis on readable code and well-defined data structures made it a powerful tool for building reliable software for both educational and industry settings.

My experience with Protel dates back to 1986 when I worked at Winsted Telephone Company.

I used a text-based command-line interpreter to program and maintain a Nortel DMS-10 digital voice-switching platform using Protel. I would later use it with the larger DMS-100, 250, and 500 switches.

PASCAL is named after Blaise Pascal, a 17th-century mathematician.

Although PASCAL’s influence is noticeable in modern programming design principles, AI systems like Sora demand more specialized tools.

Today’s AI developers use programming languages like Python, a versatile and widely used language to analyze complex data and build intelligent systems that can learn from that data.

Sora’s core modeling software is proprietary, but it leverages powerful programming languages and AI frameworks for efficiency and adaptability.

It reportedly uses C++, a high-performance language, to optimize video generation speeds, along with Python.

Additionally, Sora employs deep learning framework technologies like PyTorch or TensorFlow.

These and other high-level programming technologies are used for building and deploying complex neural networks used in AI applications.

Sora utilizes a transformer architecture known for its versatility in tasks like language processing and image generation, which allows it to excel in video creation, demonstrating capabilities beyond its original design.
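
Sora’s actual model is proprietary, so as a loose illustration of the transformer idea, here is a minimal PyTorch sketch that pushes a batch of made-up “video patch” tokens through a small transformer encoder; the layer sizes are arbitrary and are not meant to reflect Sora’s real architecture.

import torch
import torch.nn as nn

# Arbitrary illustrative sizes: 16 patch tokens per clip, 64-dimensional embeddings.
batch_size, num_patches, embed_dim = 2, 16, 64

# Two standard transformer encoder layers, each with four attention heads.
layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

# Random stand-ins for the embedded spacetime patches of a short video clip.
patch_tokens = torch.randn(batch_size, num_patches, embed_dim)

output = encoder(patch_tokens)   # self-attention mixes information across the patches
print(output.shape)              # torch.Size([2, 16, 64])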

Early AI video-generating systems faced challenges in accurately depicting spatial details, generating realistic interactions, and fully understanding cause and effect within scenes.

For example, in an AI-generated video, a person might take a bite of a cookie, yet the cookie would remain whole afterward.

This simple error highlights the AI model’s limitation in understanding object permanence.

Sora’s ability to track and understand object states (object consistency) is a notable step towards human-like reasoning for AI systems.

Its ability to generate highly detailed characters and landscapes and its natural language comprehension are impressive.

This type of technology is another step into AI’s future potential to better understand and interact with us and our increasingly complex world.

To learn about and see some Sora AI-generated videos, visit (openai.com/sora).

Its AI-generated technical details can be seen at (tinyurl.com/SoraTechnical).


(Below is the AI-generated video based on a photo of me taken in 1982)


Photo of me from 1982 that the AI generated the video from.


Below is the AI-generated video based on my text description of a man in his mid-60s with a white beard sitting at a table in a coffee shop reading the newspaper.

Below is an AI-generated video I created from a 1976 photo I took of my Dad proudly maneuvering his Kayot pontoon on Gull Lake, near Brainerd, MN. Dad loved having all of us on the pontoon, and we had access to seven separate lakes. Many happy times and good memories for me.

Below is the photo I took:





Thursday, March 28, 2024

Protecting the US during the Cold War

© Mark Ollig 


The 1962 Cuban Missile Crisis demonstrated our nation’s strength and defense capabilities in preventing a nuclear conflict.

During this time, the Duluth SAGE Direction Center (DC-10) to the northeast of us was on high alert.

SAGE (Semi-Automatic Ground Environment) was a computerized defense system developed in the 1950s by the US military and MIT’s Lincoln Laboratory.

It played a crucial role in protecting the US and Canada against the threat of Soviet bomber attacks.

SAGE combined advanced radar and computer technology to detect hostile aircraft and coordinate countermeasures with guided missile systems like the Boeing CIM-10 Bomarc – a supersonic long-range surface-to-air missile.

SAGE did not launch the missiles autonomously; human authorization was required to ensure safety and control.

On Feb. 17, 1958, the Minneapolis Star newspaper reported on the construction of the Duluth SAGE Direction Center.

It wrote of its immense scale and cost, saying “its intricate system of computers leading up to the electric brain” required “six diesel-powered generators covering nearly half a square block” and a massive, air-conditioned water-cooling system using 250,000 gallons of water every 24 hours.

In 1958, the total cost of the Duluth SAGE Direction Center was estimated to be several times the $5 million used to construct its fortified, four-story windowless concrete structure with 10-inch-thick walls.

Its massive size is evident in the fact that it enclosed 3.5 acres of floor space.

Today, its construction costs alone would be approximately $48.5 million.

Duluth’s SAGE Center was strategically important as it was along potential Soviet bomber routes, reinforcing its significance within the US air defense network.

On Nov. 15, 1959, the Duluth SAGE Direction Center (DC-10) began operations.

Duluth’s SAGE facility relied upon two large IBM-built “combat direction central” digital computers, designated AN/FSQ-7 (Army-Navy/Fixed Special Equipment), aka the Q7 – the electric brain.

The Q7 system was based on the Whirlwind I computer developed at MIT in 1951 for the US Navy.

Lincoln Laboratory documented Q7’s software with flow charts, test specifications, manuals, and coding specs.

Both SAGE Q7 computers, working in active and hot standby mode, weighed around 250 tons.

The computers contained over 500,000 lines of software programming code, up to 60,000 vacuum tubes, thousands of electronic components, and miles of wiring.

The Q7, capable of executing 75,000 instructions per second, used magnetic-core memory, magnetic drums as secondary storage, and magnetic tape drives for data archiving.

The computers included various input and output devices, such as teletype machines, printers, punch card readers, and light-sensing pens.

Military personnel interacted with the Q7 using different consoles, including the Long-Range Identification (LRI) Monitor Console, designed to detect and identify potential airborne threats and track commercial aircraft.

Due to the equipment, components, and circuitry, the Q7 required a 3,000-kilowatt (3-megawatt) power supply.

The Q7 worked in tandem with NORAD and other SAGE locations in providing real-time air traffic surveillance and air defense coordination.

SAGE console military operators used light-sensing pens to select Q7-identified hostile targets on a round situation display scope; a companion system, the AN/FSQ-8, monitored the status of an entire sector.

After confirming the target, the Q7 transmitted its attack coordinates to a missile launch site installation.

This data transmission traveled over a secure high-speed SAGE telephone network, using redundancy and encryption protocol features.

Built deep underground, the North Bay SAGE Direction Center (DC-31) was located in North Bay, Ontario, Canada.

Minnesota’s Duluth SAGE location was also flanked by direction centers in North Dakota, Iowa, and Wisconsin, forming a solid northern tier line of defense against invasions of North American airspace.

On Thursday, Oct. 24, 1962, the US Strategic Air Command (SAC) was placed on DEFCON 2, the defense readiness condition indicating a high probability of a military conflict.

DEFCON 2 was initiated after learning the Soviet Union might launch nuclear missiles from Cuba.

The US protected its long-range strategic bombers by placing them on high alert and relocating those stationed in the southeast away from potential missile attacks from Cuba.

Several long-range turbojet Boeing B-47 Stratojet aircraft were reportedly relocated to Duluth and prepared for possible retaliatory strikes against the Soviet Union.

Fortunately, the Cuban Missile Crisis ended without a nuclear exchange.

The rapidly evolving technological advancements and increased capabilities of Soviet intercontinental ballistic missiles (ICBMs) during the 1960s and 1970s led to the decommissioning of the SAGE infrastructure.

In 1983, the Joint Surveillance System introduced a specialized land-based computing system called the AN/FYQ-93, used by the Army and Navy defense systems.

The AN/FYQ-93 is a computing system used for coordinating, surveilling, and communicating between the defense systems of the US and Canada.

In 1983, the Duluth SAGE Direction Center (DC-10) ceased operations.

In August of the same year, the General Services Administration approved the transfer of its empty building to the University of Minnesota.

In 1985, with $3.9 million in funding from the Minnesota State Legislature, the former Duluth SAGE Direction Center DC-10 facility was converted into the Natural Resources Research Institute by the university.

Duluth’s SAGE Direction Center protected us from nuclear threats and is remembered as a symbol of American innovation and commitment to our security.

A photo from the Feb. 17, 1958, Minneapolis Star newspaper article showing the Duluth SAGE Direction Center (DC-10).

The former Duluth SAGE building as it stands today, repurposed into the Natural Resources Research Institute.


Friday, March 22, 2024

Sketchpad’s ‘Whirlwind’ graphical interaction

© Mark Ollig 


As World War II drew to a close, the United States embarked on Project Whirlwind.

At the time, “whirlwind” evoked images of fast, powerful, and unstoppable motion.

Whirlwind was a US Navy research project in 1944 to develop a universal flight trainer using an analog computer.

In 1947, after recognizing its potential for national defense, the Whirlwind project was redirected to the US Air Force, where its focus shifted to the construction of a high-speed digital computer.

Computing engineer Jay Forrester (1918 to 2016) led the development of the Whirlwind I digital computer at the Massachusetts Institute of Technology (MIT) servomechanisms laboratory in 1948.

The Whirlwind I computer became operational in 1951. It occupied over 2,000 square feet and could quickly process substantial amounts of data.

The computer’s visual output was viewed on a round cathode-ray tube (CRT), similar to those used on an oscilloscope.

The Whirlwind computer depended on 5,000 vacuum tubes acting as switches within its logic circuits for core functionality and performing calculations.

However, a single faulty tube could cause errors or system-wide disruption.

To maintain operational reliability, the Whirlwind team designed circuitry to identify potentially failing vacuum tubes and replace them before they caused disruptions.

The Whirlwind I quickly performed computations by automatically executing stored program instructions.

The computer was used for air defense tracking and processed radar data at a speed of 50,000 operations per second.

The Whirlwind I pioneered magnetic-core memory, a technology in which small ferrite rings, resembling miniature donuts, used magnetic polarities to represent binary data (ones and zeros).

This significantly increased speed and reliability compared to earlier vacuum tube memory, such as Williams-Kilburn tubes.

This innovation, magnetic-core memory, became the dominant form of random-access memory (RAM) in computers from the mid-1950s to the mid-1970s.
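
As a playful way to picture a core plane, the short Python sketch below treats each core as a stored polarity and mimics core memory’s destructive read, in which sensing a bit erases it and the memory controller immediately rewrites it; it is an analogy for the behavior, not a circuit-level model.

class CorePlane:
    """A toy model of a magnetic-core memory plane: one bit per ferrite ring."""

    def __init__(self, rows: int, cols: int):
        # 0 and 1 stand in for the two magnetic polarities a core can hold.
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, row: int, col: int, bit: int) -> None:
        self.cores[row][col] = bit

    def read(self, row: int, col: int) -> int:
        # Reading a real core drives it toward 0; a sense wire detects whether it flipped.
        bit = self.cores[row][col]
        self.cores[row][col] = 0     # the destructive read
        self.write(row, col, bit)    # the controller restores the value afterward
        return bit

plane = CorePlane(4, 4)
plane.write(2, 3, 1)
print(plane.read(2, 3))   # prints 1, and the bit remains stored for the next read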

The Whirlwind I computer consumed 100 kW of power and required a cooling system due to the heat generated by the vacuum tubes.

This digital computer introduced graphical output capabilities, allowing users to see computer-generated information in real time – a significant leap forward in human-machine interaction.

During the 1950s, the Whirlwind project became a significant part of the US military’s multi-billion-dollar Semi-Automatic Ground Environment (SAGE) air defense system project.

MIT’s groundbreaking research on the Whirlwind I computer, especially its rapid computational and graphical visual data display, drove the specifications for SAGE.

IBM was the primary contracted system builder for SAGE and worked directly with designs based on MIT’s work.

Other company contracts for this massive air defense project included AT&T, Burroughs, Western Electric, and the RAND Corp.

SAGE was a vast network of radar stations and command centers designed to counter potential bomber threats from the Soviet Union during the Cold War.

This complex system relied on large digital computers and associated networking equipment to coordinate data obtained from radar sites, producing a single unified image of the airspace over a wide area.

SAGE computing centers were installed across the US, including in Minnesota.

In 1954, a $5 million, four-story windowless SAGE concrete blockhouse was constructed in Hermantown, MN, next to the Duluth airport to strengthen the nation’s air defense against trans-polar Soviet air attacks.

The SAGE network provided essential data and coordination for the North American Aerospace Defense Command (NORAD).

The blockhouse building, called Duluth SAGE Direction Center DC-10, became operational in 1959, and it housed the latest radar and computer technology.

It played a vital role in air defense operations while safeguarding the airspace in the northern region of the US.

At its peak, there were 29 SAGE building centers; however, all had been decommissioned by 1983.

Afterward, the SAGE Direction Center DC-10 building was remodeled and given to the University of Minnesota Duluth in 1985.

The Whirlwind I computer led to the development of the experimental 1955 “Transistorized eXperimental” TX-0 computer, which evaluated the practicality of using transistor-based technology instead of vacuum tubes.

The TX-0 was followed by the TX-2, a research computer designed in 1956 by MIT physicist Wesley A. Clark (1927 to 2016) to explore advanced memory technologies and human-computer interaction.

The TX-2 transistorized computer used magnetic-core memory storage, a CRT display, and a light-pen stylus, offering users a new way to interact directly with a computer.

MIT student Ivan Sutherland, while working with the TX-2, unlocked the field of computer graphics with his software program, Sketchpad.

Sutherland’s 1963 Ph.D. thesis, “Sketchpad, A Man-Machine Graphical Communication System,” introduced concepts that laid the foundation for object-oriented programming.

This software allowed users to draw, manipulate, and interact with complex shapes, designs, and objects directly on the computer screen using a light pen.

The TX-2’s light pen detected light emitted from the CRT screen, allowing users to point at and directly manipulate displayed objects.

The elements were responsive, laying the groundwork for modern graphical interaction.

Using the light pen, users could directly draw their ideas onto the screen using Sketchpad’s interactive visual elements instead of relying on text-based interfaces.
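
Sketchpad’s “master drawing and instance” idea is often cited as a forerunner of today’s classes and objects; the short Python sketch below is only a loose modern analogy of that concept, not Sutherland’s TX-2 code.

from dataclasses import dataclass

@dataclass
class MasterShape:
    """A reusable master drawing: a list of (x, y) corner points."""
    points: list

@dataclass
class Instance:
    """A placed copy of a master, with its own position and scale."""
    master: MasterShape
    offset: tuple
    scale: float = 1.0

    def rendered_points(self) -> list:
        dx, dy = self.offset
        return [(x * self.scale + dx, y * self.scale + dy) for x, y in self.master.points]

# One master square reused by two instances; editing the master changes every copy.
square = MasterShape([(0, 0), (1, 0), (1, 1), (0, 1)])
copies = [Instance(square, (5, 5)), Instance(square, (10, 2), scale=2.0)]
print(copies[1].rendered_points())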

On Feb. 1, 1972, Sutherland was granted US Patent No. 3,639,736 for his invention.

Ivan Edward Sutherland, born May 16, 1938, is known as the “father of computer graphics” through his creation of Sketchpad.

Ivan Edward Sutherland demonstrating the Sketchpad program, located on page 11 of his thesis, “Sketchpad, A Man-Machine Graphical Communication System.” He referenced himself as “Author” in the typed text.



Friday, March 15, 2024

Telediagraph: sending, receiving images via telegraph

© Mark Ollig 


Ernest A. Hummel was born Oct. 15, 1865, in Germany’s Black Forest region (also known as Schwarzwald).

He immigrated to the US and settled in St. Paul, where he worked as a clock and watchmaker at A. L. Haman & Co., located in the Endicott building at 352 Robert St., near the Telegraph Cable Company.

In 1895, Hummel invented the telediagraph, a transmission and reception apparatus similar to a fax machine that could send hand-drawn images over telegraph wires to a receiving station.

In April 1900, Pearson’s Magazine of London, England, described the telediagraph as consisting of two almost identical machines – the “transmitter” and the “receiver.”

Each machine features an eight-inch cylinder precisely operated by clockwork-like mechanisms.

A fine platinum stylus, similar to a telegraph key, rests above the transmitter and receiver’s cylinder.

The stylus is positioned to trace over the tinfoil wrapped around the transmitter’s cylinder, while the receiver’s stylus draws on carbon paper wrapped around its cylinder.

The telediagraph allowed artists and telegraphy operators to reproduce drawn images over telegraph wires, enabling newspapers to print timely pictures with their stories.

On Dec. 6, 1897, Minnesota’s St. Paul Globe newspaper published a front-page article on Hummel’s telediagraph transmissions of hand-drawn images over telegraph wires.

Today, I will paraphrase and comment on parts of the article:

“Tests conducted yesterday [Sunday] over the Northern Pacific railroad system [telegraph line] demonstrated the successful electrical reproduction of hand-drawn pictures at a distance with a new, locally invented device.

The pictures were reproduced using electrical currents over telegraph wires that spanned the greater part of northern Minnesota.

Ernest A. Hummel, a jeweler with Haman & Co. in this city, has worked on a mechanism for electric picture reproduction for two years.

Mr. Hummel’s device is complicated, combining three or four different motive powers.

[In 1897, ‘motive powers’ referenced springs, gears, batteries, electric generators, compressed air, and fuels like oil.]

Those who witnessed the tests at the Northern Pacific general office yesterday confirmed the accomplishment of his goal.

His invention makes picture transmission as feasible as sending written or spoken words.

Both the transmitter and receiver apparatus are primarily constructed from brass for durability and occupy a space similar to that of a typewriter.

Each transmitter and receiver included a small electric motor that operates the carriage and moves the copying pencils back and forth across the area to be copied.

The transmitter’s carriage includes a projecting arm with a sharp platinum point.

This point moves in precise increments over the image using an ingenious automatic clockwork system.

This adjustment is controlled by a screw and a series of ratchet mechanisms, springs, and gearwheels, precisely regulating the spacing between drawn lines.

When the machine is connected to the electric circuit and the platinum point is in motion, each encounter with non-conductive material momentarily breaks the circuit.

This break activates a sharp needle on the receiver to etch a corresponding line.

When the platinum point passes over the conductive material, the circuit closes, and the needle lifts.

This process requires precise accuracy adjustments for both transmitter and receiver instruments to function harmoniously.

While the electric motor drives the carriage, the clockwork controls its speed, using a system of cogs and whirling fans (similar to a steam engine governor but using disks instead of spheres).

Preliminary trial runs conducted three weeks ago showcased the system’s potential.

The main challenge was the slowness, with the pointer taking thirty-eight minutes to cross the image.

Since then, Mr. Hummel has refined his invention, devoting his spare time to adjusting the mechanical parts for increased speed.

Yesterday, at about 11 a.m., Mr. Hummel connected his experimental machine to the regular business wires of the railroad, which are less crowded on Sundays than on other days.

At two minutes past 11 a.m., the transmitter pointer started moving over the traced features of Adolph Luetgert, a notorious criminal from Chicago.

This action was done within an electric circuit that covered 288 miles, extending from St. Paul to Staples and back.

Six relays provided an extra resistance of 600 ohms, equivalent to 40 miles of wire.

Despite the 328-mile distance, the machine functioned flawlessly, and within sixteen minutes, the image of Luetgert was faithfully reproduced at the receiving station.

Several Northern Pacific officials, electricians, and others witnessed this successful test.”

On March 19, 1898, the Minneapolis Daily Times newspaper wrote “News Pictures by Telegraph,” regarding Hummel’s invention.

“What the telegraph and telephone are to the news-gathering end of a great newspaper, the new ‘likeness-sender,’ will be to the great illustration department of a modern journal,” the newspaper said.

On June 1, 1898, the Sacramento Record-Union newspaper reported Mr. Hummel’s invention was tested by officials from the New York Herald newspaper and that it clearly replicated a 4.5-inch tin plate drawing of New York Mayor Van Wyck created by a sketcher using shellac and alcohol.

The drawing was transmitted six miles over a telegraph wire and accurately reproduced on Hummel’s receiving apparatus in 22 minutes.

Ernest A. Hummel, who passed away Oct. 10, 1944, at 78, was laid to rest in Oakland Cemetery in St. Paul.

His groundbreaking work in telediagraph technology paved the way for significant progress in electrically transmitted visual communication.

Drawings of H. R. Gibbs and Albert Scheffer were sent over the telediagraph.
 Gibbs was an early non-native resident who settled in Minnesota with her
 husband and started a farm. Scheffer was a businessman and an employee
of the Saint Paul Globe newspaper.

Ernest Hummel works with the telediagraph transmitter, while two others work with the receiver.







Friday, March 8, 2024

An AI-quantum paradigm shift is underway

© Mark Ollig


AI and quantum computing are converging to create a powerful combination with the potential to dramatically alter how we solve some of our most complex problems.

Quantum computers use quantum mechanics (the physics of behavior at the atomic and subatomic levels) to perform calculations in ways impossible for traditional digital computers.

AI has revolutionized our interactions with technology through natural language processing and machine learning.

Smart devices now understand our spoken commands more accurately, making our interactions intuitive and natural.

AI, including neural networks (computer systems modeled after the human brain), is widely used in healthcare, education, communication, and our electronic smart devices.

When combined with quantum computing’s ability to process vast amounts of information quickly, AI pushes the technological boundaries even further.

Having worked with digital binary telecommunication systems during my telephone career, I was curious about the power of quantum computing.

Quantum computers are built with qubits, which are quantum bits capable of existing in a superposition of states, meaning they can represent multiple values simultaneously.

These qubits can also become entangled, meaning they become linked so strongly that what happens to one immediately affects the other, no matter how far apart they are.

The unique ability of qubits to exist in multiple states at once (like a coin having both heads and tails) allows them to explore many possibilities simultaneously.

In contrast, traditional digital computers operate on binary bits, either 0 (off) or 1 (on), representing high and low voltage states of electrical circuits.

Digital computers use binary representation and Boolean operations (AND, OR, NOT) to perform basic arithmetic functions.
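
As a quick illustration of how Boolean operations can add numbers, here is a tiny Python sketch of a half-adder, which combines two single bits using only AND, OR, and NOT; it is a teaching example rather than how any particular processor is wired.

def half_adder(a: int, b: int) -> tuple:
    """Add two single bits using only AND, OR, and NOT."""
    carry = a & b                          # carry is 1 only when both bits are 1
    sum_bit = (a | b) & (1 - (a & b))      # exclusive-OR built from OR, AND, and NOT
    return sum_bit, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))   # prints (sum, carry)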

Quantum computers harness the unique behaviors of the quantum world, allowing for unique calculations.

Their performance depends not only on qubit count but also on the design of quantum logic gates (instructions) and selected problem-solving algorithms, such as Shor’s (for factoring large numbers) and Grover’s (for searching databases).

Quantum gates manipulate qubits to perform calculations, much as mathematical operations manipulate numbers, and frameworks like IBM’s Qiskit allow programming quantum computers using languages like Python.

Hadamard or CNOT quantum gates manipulate the states of qubits within a series of quantum algorithms to solve problems.

For example, quantum gates like Hadamard can put a single qubit in a state where it’s both 0 and 1 simultaneously, while CNOT gates can control two qubits together, acting like a switch where the state of the first qubit affects the second.
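
As a concrete example of those two gates, the short sketch below uses IBM’s Qiskit, the Python framework mentioned above, to apply a Hadamard followed by a CNOT and produce an entangled Bell state; it assumes Qiskit is installed and uses only its built-in statevector calculation, no quantum hardware required.

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Two qubits, both starting in the 0 state.
circuit = QuantumCircuit(2)
circuit.h(0)       # Hadamard puts qubit 0 into a superposition of 0 and 1
circuit.cx(0, 1)   # CNOT links qubit 1 to qubit 0, entangling them

# Work out the resulting quantum state mathematically.
state = Statevector.from_instruction(circuit)
print(state)       # only the 00 and 11 outcomes have amplitude: a Bell state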

Quantum computers have the potential to explore numerous solutions simultaneously for certain types of problems by leveraging superposition, where qubits can exist in multiple states at once.

Superposition delivers a speed advantage over traditional digital computers, which must test candidate solutions one after another.

This potential can be realized by developing proficient quantum algorithms and overcoming technological limitations.

Imagine a conventional binary bit coin with two distinct sides – heads and tails. It can only represent one of two values at a time: heads could represent 1, and tails could represent 0.

Now, imagine a qubit coin that spins incredibly fast and appears as a blur of both heads and tails.

This blur represents the qubit existing in a superposition of both states; it’s not just that you don’t know the outcome (whether it’s heads or tails); it truly hasn’t settled into one state or the other (it exists as both simultaneously).

Only when you stop the coin (similar to making a quantum mechanics measurement) does it collapse into a definite state of either heads or tails.

Entanglement is an even stranger phenomenon.

Two linked quantum particles become so connected that their states are intertwined – what happens to one particle immediately determines what happens to the other, regardless of the physical distance between them.

This quantum entanglement connection defies our everyday understanding of how objects interact.

Scientists primarily work with two types of qubits in quantum computers: superconducting and trapped ions.

Superconducting qubits are faster but require extremely low temperatures (around -454°F, close to absolute zero, which is -459.67 degrees Fahrenheit), while trapped-ion qubits can store information more reliably but also require frigid temperatures (around -436°F).

Quantum computers require these frigid temperatures to minimize the disruptive effects of thermal electronic “noise” on delicate quantum states like superposition and entanglement.

This noise can cause qubits to lose their quantum properties (decoherence), disrupting calculations.

Extreme cold is essential for quantum computers. It minimizes the vibrations of atoms, reducing thermal noise and allowing for longer, more reliable quantum operations and accurate results.

Superconducting qubits rely on the phenomenon of superconductivity, which only occurs at temperatures near absolute zero.

The future may see a “quantum internet,” enabled by a sophisticated AI-quantum architecture under development by organizations such as the Quantum Internet Alliance (QIA) and the Quantum Internet Task Force (QITF).

Combining AI’s analytical power and quantum computing’s processing platforms will lead to future discoveries and advancements far beyond what we can imagine today.

Mr. Spock from “Star Trek” would undoubtedly find it all “fascinating.”

IBM’s Heron is a 133-qubit quantum processor that uses techniques designed to reduce thermal noise errors and reliably manage up to 1,800 gates within the stability times of its qubits. Heron will be used with the new IBM Quantum System Two computer.