
Thursday, March 28, 2024

Protecting the US during the Cold War

© Mark Ollig 


The 1962 Cuban Missile Crisis demonstrated our nation’s strength and defense capabilities in preventing a nuclear conflict.

During this time, the Duluth SAGE Direction Center (DC-10) to the northeast of us was on high alert.

SAGE (Semi-Automatic Ground Environment) was a computerized defense system developed in the 1950s by the US military and MIT’s Lincoln Laboratory.

It played a crucial role in protecting the US and Canada against the threat of Soviet bomber attacks.

SAGE combined advanced radar and computer technology to detect hostile aircraft and coordinate countermeasures with guided missile systems like the Boeing CIM-10 Bomarc – a supersonic long-range surface-to-air missile.

SAGE did not launch the missiles autonomously; human authorization was required to ensure safety and control.

On Feb. 17, 1958, the Minneapolis Star newspaper reported on the construction of the Duluth SAGE Direction Center.

It wrote of its immense scale and cost, saying “its intricate system of computers leading up to the electric brain” required “six diesel-powered generators covering nearly half a square block” and a massive, air-conditioned water-cooling system using 250,000 gallons of water every 24 hours.

In 1958, the total cost of the Duluth SAGE Direction Center was estimated to be several times the $5 million used to construct its fortified, four-story windowless concrete structure with 10-inch-thick walls.

Its massive scale is evident in its 3.5 acres of enclosed floor space.
Today, the cost of its structure alone would be approximately $48.5 million.
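
The inflation adjustment behind that figure is simple scaling; a minimal sketch, assuming a cumulative 1958-to-today inflation factor of roughly 9.7 (the value implied by the article's $5 million and $48.5 million figures, not an official CPI number):

```python
# Scale a 1958 dollar amount to today's dollars with a cumulative
# inflation factor. The default 9.7 is an assumption implied by the
# article's figures ($5M -> $48.5M), not an official CPI value.
def adjust_for_inflation(amount_1958: float, factor: float = 9.7) -> float:
    return amount_1958 * factor

construction_cost_1958 = 5_000_000
today = adjust_for_inflation(construction_cost_1958)
print(f"${today:,.0f}")  # -> $48,500,000
```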

Duluth’s SAGE Center was strategically important as it was along potential Soviet bomber routes, reinforcing its significance within the US air defense network.

On Nov. 15, 1959, the Duluth SAGE Direction Center (DC-10) began operations.

Duluth’s SAGE facility relied upon two large IBM-built “combat direction central” digital computing systems known as the AN/FSQ-7 (Army-Navy/Fixed Special eQuipment), aka the Q7 – the electric brain.

The Q7 system was based on the Whirlwind I computer developed at MIT in 1951 for the US Navy.

Lincoln Laboratory documented Q7’s software with flow charts, test specifications, manuals, and coding specs.

The two SAGE Q7 computers, one active and one in hot standby, together weighed around 250 tons.

The computers contained over 500,000 lines of software programming code, up to 60,000 vacuum tubes, thousands of electronic components, and miles of wiring.

The Q7, capable of executing 75,000 instructions per second, used magnetic-core memory, magnetic drums as secondary storage, and magnetic tape drives for data archiving.

The computers included various input and output devices, such as teletype machines, printers, punch card readers, and light-sensing pens.

Military personnel interacted with the Q7 using different consoles, including the Long-Range Identification (LRI) Monitor Console, designed to detect and identify potential airborne threats and track commercial aircraft.

Due to its vast array of equipment, components, and circuitry, the Q7 required a 3,000-kilowatt (3-megawatt) power supply.

The Q7 worked in tandem with NORAD and other SAGE locations in providing real-time air traffic surveillance and air defense coordination.

SAGE console operators used light-sensing pens to select Q7-identified hostile targets on a round situation display scope; a companion system, the AN/FSQ-8, monitored the status of an entire sector.

After confirming the target, the Q7 transmitted its attack coordinates to a missile launch site installation.

This data transmission traveled over a secure high-speed SAGE telephone network, using redundancy and encryption protocol features.

Built deep underground, the North Bay SAGE Direction Center (DC-31) was located in North Bay, Ontario, Canada.

Minnesota’s Duluth SAGE location was also flanked by direction centers in North Dakota, Iowa, and Wisconsin, forming a solid northern tier line of defense against invasions of North American airspace.

On Thursday, Oct. 24, 1962, the US Strategic Air Command (SAC) was placed on DEFCON 2, the defense readiness condition indicating a high probability of a military conflict.

DEFCON 2 was initiated after learning the Soviet Union might launch nuclear missiles from Cuba.

The US protected its long-range strategic bombers by placing them on high alert and relocating those stationed in the southeast away from potential missile attacks from Cuba.

Several long-range turbojet Boeing B-47 Stratojet aircraft were reportedly relocated to Duluth and prepared for possible retaliatory strikes against the Soviet Union.

Fortunately, the Cuban Missile Crisis ended without a nuclear exchange.

The rapidly evolving technological advancements and increased capabilities of Soviet intercontinental ballistic missiles (ICBMs) during the 1960s and 1970s led to the decommissioning of the SAGE infrastructure.

In 1983, the Joint Surveillance System introduced a specialized land-based computing system, the AN/FYQ-93, used to coordinate surveillance and communications between the defense systems of the US and Canada.

In 1983, the Duluth SAGE Direction Center (DC-10) ceased operations.

In August of the same year, the General Services Administration approved the transfer of its empty building to the University of Minnesota.

In 1985, with $3.9 million in funding from the Minnesota State Legislature, the former Duluth SAGE Direction Center DC-10 facility was converted into the Natural Resources Research Institute by the university.

Duluth’s SAGE Direction Center protected us from nuclear threats and is remembered as a symbol of American innovation and commitment to our security.
A photo from the Feb. 17, 1958, Minneapolis Star newspaper
article showing the Duluth SAGE Direction Center (DC-10).

The former Duluth SAGE building as it stands today,
 repurposed into the Natural Resources Research Institute.


Friday, March 22, 2024

Sketchpad’s ‘Whirlwind’ graphical interaction

© Mark Ollig 


As World War II drew to a close, the United States embarked on Project Whirlwind.

At the time, “whirlwind” evoked images of fast, powerful, and unstoppable motion.

Whirlwind was a US Navy research project in 1944 to develop a universal flight trainer using an analog computer.

In 1947, after recognizing its potential for national defense, the Whirlwind project was redirected to the US Air Force, where its focus shifted to the construction of a high-speed digital computer.

Computing engineer Jay Forrester (1918 to 2016) led the development of the Whirlwind I digital computer at the Massachusetts Institute of Technology (MIT) servomechanisms laboratory in 1948.

The Whirlwind I computer became operational in 1951. It occupied over 2,000 square feet and could quickly process substantial amounts of data.

The computer’s visual output was viewed on a round cathode-ray tube (CRT), similar to those used on an oscilloscope.

The Whirlwind computer depended on 5,000 vacuum tubes acting as switches within its logic circuits for core functionality and performing calculations.

However, a single faulty tube could cause errors or system-wide disruption.

To maintain operational reliability, the Whirlwind team designed circuitry to identify potentially failing vacuum tubes and replace them before they caused disruptions.

The Whirlwind I quickly performed computations by automatically executing stored program instructions.

The computer was used for air defense tracking and processed radar data at a speed of 50,000 operations per second.

The Whirlwind I pioneered magnetic-core memory, a technology where small ferrite rings resembling miniature donuts used magnetic polarities to represent binary data (ones and zeros).

This significantly increased speed and reliability compared to earlier vacuum tube memory, such as Williams-Kilburn tubes.

This innovation, magnetic-core memory, became the dominant form of random-access memory (RAM) in computers from the mid-1950s to the mid-1970s.
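
The core idea can be sketched in a toy model: each ferrite ring holds one bit as a magnetization direction, and reading is destructive (the core is driven to 0 and the bit inferred from whether it flipped), so the controller must write the bit back after every read. This is an illustration of the read-restore cycle, not Whirlwind's actual circuitry:

```python
# Toy model of magnetic-core memory's destructive read. Each core
# stores one bit as a magnetization direction; reading drives the core
# to 0 and senses whether it flipped, so the bit must be rewritten.
class CorePlane:
    def __init__(self, n: int):
        self.cores = [0] * n  # magnetization state: 0 or 1

    def write(self, addr: int, bit: int):
        self.cores[addr] = bit

    def read(self, addr: int) -> int:
        bit = self.cores[addr]   # sense pulse appears only if the core flips
        self.cores[addr] = 0     # destructive: core driven to 0
        self.write(addr, bit)    # controller restores the bit
        return bit

plane = CorePlane(8)
plane.write(3, 1)
print(plane.read(3), plane.read(3))  # 1 1  (read-restore keeps the bit)
```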

The Whirlwind I computer consumed 100 kW of power and required a cooling system due to the heat generated by the vacuum tubes.

This digital computer introduced graphical output capabilities, allowing users to see computer-generated information in real time – a significant leap forward in human-machine interaction.

During the 1950s, the Whirlwind project became a significant part of the US military’s multi-billion-dollar Semi-Automatic Ground Environment (SAGE) air defense system project.

MIT’s groundbreaking research on the Whirlwind I computer, especially its rapid computational and graphical visual data display, drove the specifications for SAGE.

IBM was the primary contracted system builder for SAGE and worked directly with designs based on MIT’s work.

Other company contracts for this massive air defense project included AT&T, Burroughs, Western Electric, and the RAND Corp.

SAGE was a vast network of radar stations and command centers designed to counter potential bomber threats from the Soviet Union during the Cold War.

This complex system relied on large digital computers and associated networking equipment to coordinate data obtained from radar sites, producing a single unified image of the airspace over a wide area.

SAGE computing centers were installed across the US, including in Minnesota.

In 1954, a $5 million, four-story windowless SAGE concrete blockhouse was constructed in Hermantown, MN, next to the Duluth airport to strengthen the nation’s air defense against trans-polar Soviet air attacks.

The SAGE network provided essential data and coordination for the North American Aerospace Defense Command (NORAD).

The blockhouse building, called Duluth SAGE Direction Center DC-10, became operational in 1959, and it housed the latest radar and computer technology.

It played a vital role in air defense operations while safeguarding the airspace in the northern region of the US.
At its peak, there were 29 SAGE building centers; however, all had been decommissioned by 1983.

Afterward, the SAGE Direction Center DC-10 building was remodeled and given to the University of Minnesota Duluth in 1985.

The Whirlwind I computer led to the development of the experimental 1955 “Transistorized eXperimental” TX-0 computer, which evaluated the practicality of using transistor-based technology instead of vacuum tubes.

The TX-0 was followed by the TX-2, a research computer designed in 1956 by MIT physicist Wesley A. Clark (1927 to 2016) to explore advanced memory technologies and human-computer interaction.

The TX-2 transistorized computer used magnetic-core memory storage, a CRT display, and a light-pen stylus, offering users a new way to interact directly with a computer.

MIT student Ivan Sutherland, while working with the TX-2, unlocked the field of computer graphics with his software program, Sketchpad.

Sutherland’s 1963 Ph.D. thesis, “Sketchpad, A Man-Machine Graphical Communication System,” introduced the foundation of object-oriented programming.

This software allowed users to draw, manipulate, and interact with complex shapes, designs, and objects directly on the computer screen using a light pen.

The TX-2’s light pen detected light emitted from the CRT screen, allowing users to point at and directly manipulate displayed objects.

The elements were responsive, laying the groundwork for modern graphical interaction.

Using the light pen, users could directly draw their ideas onto the screen using Sketchpad’s interactive visual elements instead of relying on text-based interfaces.
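
One way to picture the light pen's trick: the CRT redraws the display list one object at a time, and the instant the pen's photocell senses the beam tells the software which object is under the pen tip. A minimal sketch with hypothetical object names (not Sketchpad's actual code):

```python
# Hypothetical sketch of light-pen hit detection. Each refresh, objects
# in the display list are drawn in turn; the pen's photocell fires when
# the beam lights a point under the pen tip, identifying that object.
objects = {
    "line_A":   [(0, 0), (1, 1), (2, 2)],
    "circle_B": [(5, 5), (5, 6), (6, 5)],
}

def pen_hit(pen_position):
    """Scan the display list in draw order; return the object whose
    beam passes under the pen tip, or None if nothing is there."""
    for name, points in objects.items():
        if pen_position in points:
            return name  # photocell fired while this object was drawn
    return None

print(pen_hit((5, 6)))  # -> circle_B
```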

On Feb. 1, 1972, Sutherland was granted US Patent No. 3,639,736 for his invention.

Ivan Edward Sutherland, born May 16, 1938, is known as the “father of computer graphics” through his creation of Sketchpad.

Ivan Edward Sutherland demonstrating the Sketchpad program,
 located on page 11 of his thesis,
 “Sketchpad, A Man-Machine Graphical Communication System.”
He referenced himself as “Author” in the typed text.



Friday, March 15, 2024

Telediagraph: sending, receiving images via telegraph

© Mark Ollig 


Ernest A. Hummel was born Oct. 15, 1865, in Germany’s Black Forest region (also known as Schwarzwald).

He immigrated to the US and settled in St. Paul, where he worked as a clock and watchmaker at A. L. Haman & Co., located in the Endicott building at 352 Robert St., near the Telegraph Cable Company.

In 1895, Hummel invented the telediagraph, a transmission and reception apparatus similar to a fax machine that could send hand-drawn images over telegraph wires to a receiving station.

In April 1900, Pearson’s Magazine of London, England, described the telediagraph as consisting of two almost identical machines – the “transmitter” and the “receiver.”

Each machine features an eight-inch cylinder precisely operated by clockwork-like mechanisms.

A fine platinum stylus, similar to a telegraph key, rests above the transmitter and receiver’s cylinder.

The stylus rests on tinfoil wrapped around the transmitter’s cylinder, and on carbon paper wrapped around the receiver’s cylinder.

The telediagraph allowed artists and telegraphy operators to begin reproducing drawn images over telegraph wires, which allowed newspapers to print timely pictures with their stories.

On Dec. 6, 1897, Minnesota’s St. Paul Globe newspaper published a front-page article on Hummel’s telediagraph transmissions of hand-drawn images over telegraph wires.

Today, I will paraphrase and comment on parts of the article:

“Tests conducted yesterday [Sunday] over the Northern Pacific railroad system [telegraph line] demonstrated the successful electrical reproduction of hand-drawn pictures at a distance with a new, locally invented device.

The pictures were reproduced using electrical currents over telegraph wires that spanned the greater part of northern Minnesota.

Ernest A. Hummel, a jeweler with Haman & Co. in this city, has worked on a mechanism for electric picture reproduction for two years.

Mr. Hummel’s device is complicated, combining three or four different motive powers.

[In 1897, ‘motive powers’ referenced springs, gears, batteries, electric generators, compressed air, and fuels like oil.]

Those who witnessed the tests at the Northern Pacific general office yesterday confirmed the accomplishment of his goal.

His invention makes picture transmission as feasible as sending written or spoken words.

Both the transmitter and receiver apparatus are primarily constructed from brass for durability and occupy a space similar to that of a typewriter.

Each transmitter and receiver includes a small electric motor that operates the carriage and moves the copying pencils back and forth across the area to be copied.

The transmitter’s carriage includes a projecting arm with a sharp platinum point.

This point moves in precise increments over the image using an ingenious automatic clockwork system.

This adjustment is controlled by a screw and a series of ratchet mechanisms, springs, and gearwheels, precisely regulating the spacing between drawn lines.

When the machine is connected to the electric circuit and the platinum point is in motion, each encounter with non-conductive material momentarily breaks the circuit.

This break activates a sharp needle on the receiver to etch a corresponding line.

When the platinum point passes over the conductive material, the circuit closes, and the needle lifts.

This process requires precise accuracy adjustments for both transmitter and receiver instruments to function harmoniously.

While the electric motor drives the carriage, the clockwork controls its speed, using a system of cogs and whirling fans (similar to a steam engine governor but using disks instead of spheres).

Preliminary trial runs conducted three weeks ago showcased the system’s potential.

The main challenge was the slowness, with the pointer taking thirty-eight minutes to cross the image.

Since then, Mr. Hummel has refined his invention, devoting his spare time to adjusting the mechanical parts for increased speed.

Yesterday, at about 11 a.m., Mr. Hummel connected his experimental machine to the regular business wires of the railroad, which are less crowded on Sundays than on other days.

At two minutes past 11 a.m., the transmitter pointer started moving over the traced features of Adolph Luetgert, a notorious criminal from Chicago.

This action was done within an electric circuit that covered 288 miles, extending from St. Paul to Staples and back.

Six relays provided an extra resistance of 600 ohms, equivalent to 40 miles of wire.

Despite the 328-mile distance, the machine functioned flawlessly, and within sixteen minutes, the image of Luetgert was faithfully reproduced at the receiving station.

Several Northern Pacific officials, electricians, and others witnessed this successful test.”
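
The break-and-make encoding the article describes can be sketched in a few lines: bare tinfoil conducts (circuit closed, receiver needle lifted), the shellac drawing breaks the circuit, and each break drops the receiver's needle to etch a mark. A simplified simulation of that encoding, not Hummel's actual mechanism:

```python
# Simplified telediagraph simulation (an illustration of the circuit
# break-and-make encoding, not Hummel's actual mechanism).
# '#' = non-conductive shellac line, '.' = bare conductive tinfoil.
original = [
    "..##..",
    ".#..#.",
    "..##..",
]

def transmit(image):
    """Yield one circuit state per scanned cell: True = circuit closed
    (stylus on bare foil), False = circuit broken (stylus on shellac)."""
    for row in image:
        yield [cell == "." for cell in row]

def receive(signal):
    """Etch a mark on the carbon paper wherever the circuit broke."""
    return ["".join("." if closed else "#" for closed in row)
            for row in signal]

copy = receive(transmit(original))
assert copy == original  # the drawing is faithfully reproduced
```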

On March 19, 1898, the Minneapolis Daily Times newspaper wrote “News Pictures by Telegraph,” regarding Hummel’s invention.

“What the telegraph and telephone are to the news-gathering end of a great newspaper, the new ‘likeness-sender,’ will be to the great illustration department of a modern journal,” the newspaper said.

On June 1, 1898, the Sacramento Record-Union newspaper reported Mr. Hummel’s invention was tested by officials from the New York Herald newspaper and that it clearly replicated a 4.5-inch tin plate drawing of New York Mayor Van Wyck created by a sketcher using shellac and alcohol.

The drawing was transmitted six miles over a telegraph wire and accurately reproduced on Hummel’s receiving apparatus in 22 minutes.

Ernest A. Hummel, who passed away Oct. 10, 1944, at 78, was laid to rest in Oakland Cemetery in St. Paul.

His groundbreaking work in telediagraph technology paved the way for significant progress in electrically transmitted visual communication.

Drawings of H. R. Gibbs and Albert Scheffer were sent over the telediagraph.
 Gibbs was an early non-native resident who settled in Minnesota with her
 husband and started a farm. Scheffer was a businessman and an employee
of the Saint Paul Globe newspaper.

Ernest Hummel works with the telediagraph transmitter,
while two others work with the receiver.







Friday, March 8, 2024

An AI-quantum paradigm shift is underway

© Mark Ollig


AI and quantum computing are converging to create a powerful combination with the potential to dramatically alter how we solve some of our most complex problems.

Quantum computers use quantum mechanics (the physics of behavior at the atomic and subatomic levels) to perform calculations in ways impossible for traditional digital computers.

AI has revolutionized our interactions with technology through natural language processing and machine learning.

Smart devices now understand our spoken commands more accurately, making our interactions intuitive and natural.

AI, including neural networks (computer systems modeled after the human brain), is widely used in healthcare, education, communication, and our electronic smart devices.

When combined with quantum computing’s ability to process vast amounts of information quickly, AI pushes the technological boundaries even further.

Having worked with digital binary telecommunication systems during my telephone career, I was curious about the power of quantum computing.

Quantum computers are built with qubits, which are quantum bits capable of existing in a superposition of states, meaning they can represent multiple values simultaneously.

These qubits can also become entangled, meaning they are linked so strongly that measuring one instantly determines the state of the other, no matter how far apart they are.

The unique ability of qubits to exist in multiple states at once (like a coin having both heads and tails) allows them to explore many possibilities simultaneously.

In contrast, traditional digital computers operate on binary bits, either 0 (off) or 1 (on), representing high and low voltage states of electrical circuits.

Digital computers use binary representation and Boolean operations (AND, OR, NOT) to perform basic arithmetic functions.
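
As a concrete instance of Boolean operations performing arithmetic, a one-bit half adder builds addition entirely from AND, OR, and NOT (XOR is composed from the three):

```python
# A half adder: adding two binary bits using Boolean operations alone.
# sum bit = a XOR b, carry bit = a AND b; XOR is built from OR, AND, NOT.
def half_adder(a: int, b: int):
    xor = (a or b) and not (a and b)   # XOR composed from OR, AND, NOT
    return int(xor), int(a and b)      # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
# 1 + 1 gives (0, 1): sum 0 with carry 1, i.e. binary 10 = decimal 2
```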

Quantum computers harness the unique behaviors of the quantum world, allowing for unique calculations.

Their performance depends not only on qubit count but also on the design of quantum logic gates (instructions) and selected problem-solving algorithms, such as Shor’s (for factoring large numbers) and Grover’s (for searching databases).

Quantum gates manipulate qubits to perform calculations, much as mathematical operators transform values. Frameworks like IBM’s Qiskit allow quantum computers to be programmed using languages such as Python.

Hadamard or CNOT quantum gates manipulate the states of qubits within a series of quantum algorithms to solve problems.

For example, quantum gates like Hadamard can put a single qubit in a state where it’s both 0 and 1 simultaneously, while CNOT gates can control two qubits together, acting like a switch where the state of the first qubit affects the second.
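
Those two gates can be sketched in plain Python by tracking the four amplitudes of a two-qubit state (no quantum hardware or Qiskit required): a Hadamard on qubit 0 followed by a CNOT yields the entangled Bell state, where only “00” and “11” can ever be observed, each with probability 1/2.

```python
import math

# Two-qubit state as four amplitudes for |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

# Hadamard on qubit 0 (the left bit): |0> -> (|0> + |1>) / sqrt(2).
# It mixes amplitude pairs that differ only in qubit 0: (0,2) and (1,3).
h = 1 / math.sqrt(2)
state = [h * (state[0] + state[2]),
         h * (state[1] + state[3]),
         h * (state[0] - state[2]),
         h * (state[1] - state[3])]

# CNOT: if qubit 0 is 1, flip qubit 1 (swap the |10> and |11> amplitudes).
state[2], state[3] = state[3], state[2]

# Measurement probabilities are the squared amplitudes.
probs = {f"{i:02b}": round(amp * amp, 3) for i, amp in enumerate(state)}
print(probs)  # {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```

The 50/50 split between “00” and “11”, with “01” and “10” impossible, is exactly the entanglement described above: the two bits always agree.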

Quantum computers have the potential to explore numerous solutions simultaneously for certain types of problems by leveraging superposition, where qubits can exist in multiple states at once.

Superposition can deliver a speed advantage over traditional digital computers, which must test solutions sequentially.

This potential can be realized by developing proficient quantum algorithms and overcoming technological limitations.
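
The scale of that advantage can be illustrated with Grover’s search, mentioned earlier: finding one marked item among N requires about N/2 checks on average classically, but only about (π/4)·√N quantum queries. A back-of-the-envelope comparison of the query-count formulas (an illustration, not a quantum simulation):

```python
import math

# Expected query counts for unstructured search over n items:
# classical average ~ n/2, Grover ~ (pi/4) * sqrt(n).
def classical_queries(n: int) -> float:
    return n / 2

def grover_queries(n: int) -> float:
    return (math.pi / 4) * math.sqrt(n)

n = 1_000_000
print(f"classical ~{classical_queries(n):,.0f} checks")   # ~500,000
print(f"Grover    ~{grover_queries(n):,.0f} queries")     # ~785
```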

Imagine a conventional binary bit coin with two distinct sides – heads and tails. It can only represent one of two values at a time: heads could represent 1, and tails could represent 0.

Now, imagine a qubit coin that spins incredibly fast and appears as a blur of both heads and tails.

This blur represents the qubit existing in a superposition of both states; it’s not just that you don’t know the outcome (whether it’s heads or tails); it truly hasn’t settled into one state or the other (it exists as both simultaneously).

Only when you stop the coin (similar to making a quantum mechanics measurement) does it collapse into a definite state of either heads or tails.

Entanglement is an even stranger phenomenon.

Two linked quantum particles become so connected that their states are intertwined – what happens to one particle immediately determines what happens to the other, regardless of the physical distance between them.

This quantum entanglement connection defies our everyday understanding of how objects interact.

Scientists primarily work with two types of qubits in quantum computers: superconducting circuits and trapped ions.

Superconducting qubits are faster but require extremely low temperatures (around -454°F; absolute zero is -459.67°F), while trapped-ion qubits can store information more reliably but also require frigid temperatures (around -436°F).

Quantum computers require these frigid temperatures to minimize the disruptive effects of thermal electronic “noise” on delicate quantum states like superposition and entanglement.

This noise can cause qubits to lose their quantum properties (decoherence), corrupting calculations.

Extreme cold is essential for quantum computers. It minimizes the vibrations of atoms, reducing thermal noise and allowing for longer, more reliable quantum operations and accurate results.

Superconducting qubits rely on the phenomenon of superconductivity, which only occurs at temperatures near absolute zero.

The future may see a “quantum internet,” enabled by a sophisticated AI-quantum architecture under development by organizations such as the Quantum Internet Alliance (QIA) and the Quantum Internet Task Force (QITF).

Combining AI’s analytical power and quantum computing’s processing platforms will lead to future discoveries and advancements far beyond what we can imagine today.

Mr. Spock from “Star Trek” would undoubtedly find it all “fascinating.”

IBM’s Heron is a 133-qubit quantum processor that uses techniques
 designed to reduce thermal noise errors and reliably manage up to 1800 gates
 within the stability times of its qubits.
Heron will be used with the new IBM Quantum System Two computer.


Friday, March 1, 2024

Spider: a lunar rehearsal in Earth orbit

© Mark Ollig

On Sept. 12, 1962, President John F. Kennedy spoke at Rice University in Houston, TX, where he set the goal of landing a man on the moon and bringing him back safely to Earth before the decade’s end.

While in Houston, President Kennedy visited what is now known as the Johnson Space Center.

During his visit, he was shown a mock-up of the lunar excursion module for use on the moon.
Although initially designated the lunar excursion module and known by the acronym LEM, it was later shortened to lunar module, with the acronym LM.

At 11 a.m. EST, March 3, 1969, three astronauts aboard the Apollo 9 spacecraft lifted off aboard a Saturn V rocket from Launch Pad 39A at NASA’s Kennedy Space Center (KSC) on Merritt Island, FL.

The F-1 engines on the Saturn V rocket were the most powerful single-chamber liquid-fueled engines developed at the time.

These engines generated 7.5 million pounds of thrust, propelling the Apollo 9 spacecraft into Earth orbit.

Astronauts on board were Commander James A. McDivitt (1929 to 2022), command module pilot David R. Scott (1932 to present), and lunar module pilot Russell L. Schweickart (1935 to present).

Following liftoff, the flight was managed by Mission Control at the Johnson Space Center under Flight Director Eugene Kranz.

Apollo 9’s primary objective was to test all aspects of the lunar module in space, ensuring it was ready for lunar operations.

During their orbit around the Earth, two astronauts aboard the lunar module (LM) nicknamed “Spider” conducted tests and practiced rendezvous and docking maneuvers with the command-service module (CSM) called “Gumdrop.”

The lunar module was nicknamed “Spider” due to its spidery leg-like shape, while the command-service module was called “Gumdrop” because of its candy-like appearance when it arrived at KSC wrapped in blue cellophane.

The Apollo command-service module was designed to support the crew during the mission and provide life support and operational functions.

The detachable command module (CM) spacecraft ensured their safe return to Earth.

During liftoff, the lunar module was positioned beneath the CSM with its legs folded inside the spacecraft-to-lunar module adapter (SLA) compartment atop the Saturn V rocket’s third stage.

The astronauts conducted a series of five engine burns on the CSM’s service propulsion system (SPS) to simulate various scenarios, including LM rescues.

The docking and rendezvous maneuvers of the CSM and LM began with extracting the lunar module from its “garage” inside the SLA.

These exercises were necessary for testing the docking mechanism’s reliability, specifically when firing the SPS engine.
McDivitt and Schweickart then pressurized the tunnel connecting Gumdrop and Spider by removing the hatch of the CSM and attaching the umbilical connectors, providing power, communications, and life support for the LM.

During the Apollo 9 mission, Scott performed an extravehicular activity (EVA), or spacewalk.

Floating through the open hatch of the CM, he took photographs, collected thermal samples from the spacecraft’s exterior, and verified the operation of equipment and various procedures for the upcoming Apollo 11 moon landing.

Schweickart and McDivitt conducted practice maneuvers of the LM descent and ascent engines firing on orbital change patterns, simulating lunar-orbit rendezvous, and backup abort procedures.

Schweickart would perform a spacewalk outside the LM wearing the portable life support system (PLSS) backpack.

The PLSS maintained suit pressure and oxygen levels, provided breathable oxygen, filtered out contaminants, circulated cooling to keep temperatures comfortable, enabled communications, and monitored his overall health parameters.

Schweickart confirmed that all the systems in the PLSS backpack functioned accurately.

He could effectively work outside the lunar module and complete tasks in the weightless environment future astronauts would experience on the moon.

A seven-pound Westinghouse camera designed for the moon landing was successfully tested; the pictures taken were described as “spectacularly clear.”

Inside the LM ascent stage (crew cabin), McDivitt and Schweickart separated from the attached lunar module descent stage (lower platform), which burned up during re-entry into the Earth’s atmosphere nine days later.

After firing the LM ascent stage engine, McDivitt and Schweickart navigated the spacecraft to a lower orbit to line up for a rendezvous with the CM, which Scott piloted.

They successfully rendezvoused and docked both their spacecraft.

After McDivitt and Schweickart re-entered the CM, the empty LM ascent stage was detached and floated away.

On March 13, 1969, the Apollo 9 command module separated from the service module, re-entered Earth’s atmosphere, and splashed down in the Atlantic Ocean at 12:00:54 p.m. EST, concluding a mission that lasted 241 hours.

The LM ascent stage remained in Earth’s orbit until Oct. 23, 1981, when it disintegrated upon re-entering the planet’s atmosphere.

Today, the San Diego Air & Space Museum in California displays the Apollo 9 command module “Gumdrop.”

Apollo 9’s success led to Apollo 10, the dress rehearsal for the lunar landing, and finally, the historic Apollo 11 lunar landing, nearly seven years after Kennedy’s speech at Rice University.

My model of the Apollo 9 lunar module docked with the
command-service module with added labels

My model of the docked Apollo-Lunar Module and Command-Service Module