Thursday, April 30, 2026

Q7: The computer that guarded North America

@Mark Ollig

In 1955, IBM tested the XD-1, a new computer that brought large-scale, real-time computing into US air defense.

It was evaluated by IBM and MIT engineers at Lincoln Laboratory in Lexington, MA, for use in what became SAGE, the Semi-Automatic Ground Environment air-defense system.

The XD-1’s official military designation became AN/FSQ-7: AN for Army-Navy, ‘F’ for fixed installation, ‘S’ for special or combination equipment, and ‘Q’ for special-purpose equipment, with seven as the model number. It was referred to as Q7.

Q7 was built on earlier work with MIT’s Whirlwind II real-time computer and 1950 radar-to-digital data-link experiments, laying the groundwork for SAGE.

During the Cold War, the United States and Canada needed a way to spot long-range Soviet bombers coming over the Arctic.

At its peak, SAGE used hundreds of radars, 24 direction centers, and three combat centers.

Each SAGE direction center was inside a windowless, reinforced concrete building.

These blockhouses protected both the computers and the people working there from blasts and nuclear fallout.

The four-story SAGE direction center buildings were built on huge foundation slabs.

They had to support two AN/FSQ-7 computers, each weighing about 250 tons, plus massive power, cooling, and command equipment.

These computers handled radar data, showed aircraft tracks on operator screens, and helped guide interceptor jets and missiles against Soviet bombers carrying nuclear weapons into North America.

By 1958, more than 7,000 IBM employees worked on the Q7 project. This included engineers, senior managers, and technical liaisons who worked with the military on installation, operation, and maintenance.

In Minnesota, the SAGE direction center in Duluth kept watch over the northern skies.

A Minneapolis Star article Feb. 17, 1958, describes the Duluth SAGE building as a “windowless four-story concrete blockhouse.”

The building, with walls of poured concrete 18 inches thick, cost approximately $5 million.

It required the “installation of a huge powerhouse” using six diesel-powered generating units covering nearly half a city block.

The article said the units and the “intricate internal electronics to link them up to the electric brain and other equipment” cost somewhere between $15 million and $50 million.

The Duluth SAGE facility’s air-conditioning system required 250,000 gallons of cooling water every 24 hours.

MIT developed the magnetic-core memory for the Q7, using tiny ferromagnetic rings to store binary values (1 or 0) based on their magnetization direction.

Electrical pulses traveled through wires that passed through the rings to read and write data, which remained stored even if the power went out.
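
The read-and-rewrite cycle described above can be sketched in a few lines of Python. This is a toy model for illustration only, not the Q7’s actual circuitry; the class and method names are my own invention.

```python
# Toy model of destructive-read magnetic-core memory (illustrative only):
# each core stores one bit as a magnetization direction; a read drives the
# selected core toward 0 and senses whether it flipped.

class CorePlane:
    def __init__(self, rows, cols):
        # All cores start magnetized in the "0" direction.
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # Coincident half-currents on one row wire and one column wire
        # select exactly one core and set its magnetization.
        self.cores[row][col] = bit

    def read(self, row, col):
        # Destructive read: drive the core to 0. A pulse on the sense
        # wire (meaning the core flipped) indicates it held a 1.
        sensed = self.cores[row][col]
        self.cores[row][col] = 0
        # The stored bit must be written back after every read.
        self.write(row, col, sensed)
        return sensed

plane = CorePlane(4, 4)
plane.write(2, 3, 1)
print(plane.read(2, 3))  # prints 1, and the rewrite keeps it stored
```

Because the cores hold their magnetization without current, the stored bits survive a power loss, which is the property the column describes.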

This gave the computer a fast, reliable way to store radar data obtained from across North America.

IBM magnetic tape drives and magnetic drums served as extra storage. Punched cards were used to load the first program data.

The Q7 system had more than 500,000 lines of machine-language code and could run about 75,000 instructions every second, impressive speed for the 1950s.

Technicians did regular maintenance on plug-in modules that held components such as resistors, capacitors, and vacuum tubes.

The module design made it easier for them to remove and replace parts, and technicians could walk through the computer’s narrow internal corridors to inspect, add to, and maintain its wiring.

Working day and night, technicians replaced parts and fixed problems without turning off the Q7 system, making it one of the first real-time, always-on computers.

The 32-bit AN/FSQ-7 system used almost 60,000 vacuum tubes, with about 49,000 inside the computers themselves, to handle its logic operations.

All those vacuum tubes produced so much heat that the system needed industrial-scale cooling.

At each SAGE direction center, Western Electric installed six 650‑kilowatt diesel generators, providing about 3.9 megawatts of capacity to meet the complex’s roughly 3‑megawatt power demand, about as much electricity as a small town would use.

The IBM AN/FSQ-7 computer was officially accepted at McGuire Air Force Base in New Jersey June 26, 1958, becoming fully operational by July 1.

To do its job, the AN/FSQ-7 depended on a major communications breakthrough to receive data from radar sites.

Early AT&T Bell System data sets (modems) sent information at 110 bits per second, but SAGE used a special AT&T digital network that reached about 1,300 bits per second over dedicated phone lines.

Bell System engineers turned regular telephone circuits into a high-speed data network by conditioning the lines, ensuring digital signals remained clear from end to end.

Radar sites all over North America measured each detected aircraft’s distance and direction.

That information was prepared for digital transmission over conditioned AT&T circuits to the SAGE direction centers.

The AN/FSQ-7 computers processed incoming data in near real time as their software prioritized potential threats from Soviet bombers and other unknown aircraft, updating the airspace picture.

Inside SAGE, console operators sat in windowless rooms and watched blinking dots move across large, round cathode-ray tube screens, tracking aircraft in real time.

These rooms were known for their dim blue lighting, which reduced glare and allowed operators to use optical sensors called light guns.

Operators pointed these light guns at target blips on the radar screens to select specific aircraft and coordinate a response.

AT&T built a digital defense network by linking SAGE through about 25,000 telephone lines across North America.

The network relied on Western Electric for signal repeaters, high-gain carrier amplifiers, and special 102A switching systems, which helped quickly route radar and voice traffic between sites and the AN/FSQ-7 computers.

In the early 1960s, ICBMs diminished SAGE’s strategic value, but it continued to provide real-time airspace monitoring until its decommissioning in January 1984.

Today the former Duluth SAGE building, remodeled in the 1980s with windows, houses the University of Minnesota Duluth’s Natural Resources Research Institute.

On the lower left of the AN/FSQ-7 operator console, a standard mid‑1950s rotary desk telephone is mounted upright in a fixed cradle, with the dial plate directly below the handset and its coiled cord disappearing into the console.

It is a strong visual reminder that, in the end, a human voice still made the final decisions over the computer’s logic.



Friday, April 24, 2026

From slow-loading images to instant ones

@Mark Ollig

Do you recall in 1995, sitting at your computer and listening to the modem’s beeps, screeches, and bursts of static as it negotiated a connection with a bulletin board system or a commercial service like America Online?

I do, all while hoping no one picked up on the extension phone.

Typical modem speeds back then were around 28.8 kbps, and downloading a 1-MB file usually took about five to seven minutes.

As an image downloaded, our patience would be tested as it slowly appeared on the screen, one scan line at a time from top to bottom.

I am not sure today’s youth would have the patience to watch that on their smartphones.

Faster digital service was also available from the local telephone company over existing copper lines through Integrated Services Digital Network, or ISDN.

Using its Basic Rate Interface, or BRI, service, customers had two 64-kilobit-per-second B channels that could be bonded for data speeds up to 128 kbps.

It used a terminal adapter, or TA, rather than a standard analog modem.

At 128 kbps, downloading a 1 MB file typically took about 60 to 65 seconds.
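
The download times above are straightforward arithmetic, sketched here in Python; real-world transfers were slower still because of protocol overhead and line noise.

```python
# Back-of-the-envelope download times for a 1 MB file at mid-1990s speeds.

FILE_BITS = 1 * 1024 * 1024 * 8          # 1 MB expressed in bits

for label, bits_per_second in [("28.8 kbps modem", 28_800),
                               ("128 kbps bonded ISDN BRI", 128_000)]:
    seconds = FILE_BITS / bits_per_second
    print(f"{label}: about {seconds:.0f} seconds ({seconds / 60:.1f} minutes)")
```

At 28.8 kbps the raw transfer works out to roughly 291 seconds, just under five minutes, and at 128 kbps to about 66 seconds, matching the five-to-seven-minute and 60-to-65-second figures once overhead is accounted for.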

Before websites took off, many companies operated their own in-house dial-up bulletin board systems, or BBSs, allowing users to browse products and make purchases.

Local TV and radio stations often advertised their BBS phone numbers for the public to call in, and newspapers were launching their own systems too.

Schools, universities, and city governments across the country set up BBSs to share information and connect with students, parents, and residents.

In the early 1990s in Minnesota, businesses used BBS platforms where customers could check prices, browse catalogs, and place orders before the internet became common.

Minneapolis-area TV and radio stations promoted their BBS lines for weather, sports, and schedules: a local, text-based online service that predated the World Wide Web.

I started my BBS around 1992, with the desire to become an active participant in the growing local online community scene.

At the time, I subscribed to and learned a lot from a BBS-themed magazine called “Boardwatch,” started by Jack Rickard.

I also watched the PBS program “Computer Chronicles,” hosted by Stewart Cheifet.

Many BBSs ran on a popular software program called “The Major BBS” developed by Galacticomm.

I installed this BBS software on my computer, along with six dedicated local telephone lines connected to 19.2 kbps Hayes modems.

My BBS was called “WBBS OnLine!” (Winsted Bulletin Board Service).

By the mid-1990s, national estimates suggested there were around 60,000 BBSs operating in the US.

In Minnesota alone, archived regional directories show that hundreds of dial-up BBSs were active at the time.

Their use in education was fairly limited, usually restricted to classrooms or labs with only a handful of connected computers.

An Aug. 14, 1995, article in the West Central Tribune of Willmar, MN, described senior citizens exploring “cyber space,” with local classes helping older adults.

Many with little or no computer experience were made comfortable with computers and dial-up BBSs.

Seniors could send electronic mail over telephone lines, look up recipes and health information, and chat online with other older adults around the world.

“Just because you’re over 60 doesn’t mean you stop looking forward to tomorrow,” social worker Alice Munro Hilliard said. “Their willingness to learn is phenomenal.”

The article also noted that about 5,000 people age 50 and older nationwide subscribed to the SeniorNet bulletin board system, showing how dial-up communities were already drawing in a wide range of users.

The World Wide Web, or the web, operates over the internet. Mosaic, one of the first widely used web browsers, launched in 1993.

During the mid-1990s, most websites consisted of text and links, with graphics and images that loaded slowly, and BBSs and mainstream commercial dial-up services did not support streaming video.

By 1995, Netscape Navigator dominated the browser market, though Internet Explorer had also entered the scene.

During the 2020-21 COVID-19 pandemic, internet use became vital for work and education, highlighting how far connectivity has advanced since 1995.

Web browsers like Chrome and Safari became standard, while Microsoft shifted from Internet Explorer to Edge.

Education and commerce changed, with live-streaming lessons and global marketplaces becoming common.

Today, internet data travels over fiber-optic cables and optical transport systems, commonly operating at 100 to 400 Gbps, while carriers such as Windstream and Zayo Group have demonstrated 1 Tbps (terabits per second) transmission speeds in real-world trials.

In 2025, about six billion people were online, roughly three-quarters of the world’s population.

Many home internet connections in the United States now exceed 200 megabits per second, about 7,000 times faster than the 28.8 kbps modem connections common in the mid-1990s.

We have come a long way from the days when a single megabyte image took minutes to load, appearing one line at a time on the screen, to today, when it appears almost instantly.

Illustration showing how Google’s Nano Banana 2 turns a user’s request into a finished 4K image through Gemini 3.1 Flash Image. Photo by Gemini Nano Banana 2 and Mark Ollig.



Thursday, April 16, 2026

Minnesota’s early school-connected computer network

@Mark Ollig

During the 1965-66 school year at University High School in Minneapolis, teachers installed a Model 33 ASR teletype.

The machine looked like a sturdy typewriter and produced a steady clatter as it typed onto paper.

Using its keyboard and a long-distance telephone connection, students logged in to Dartmouth College’s time‑sharing mainframe computing network in New Hampshire.

In early 1967, 18 school districts around Minneapolis and St. Paul formed a cooperative called the Minnesota School Districts Data Processing Joint Board, known as Total Information for Educational Systems (TIES).

They shared the costs and computer resources of the Sperry-Rand UNIVAC (Universal Automatic Computer) 1110 located in the St. Paul area.

A West Central Tribune article May 12, 1967, described TIES’ purpose as “establishing and conducting a data processing center for the service of said school districts and other school districts located in the state of Minnesota.”

Using standard telephone lines and acoustic modems to link teletype terminals with distant mainframes, multiple school districts established dial-up connections that allowed them to share computing power for both administrative tasks and classroom teaching.

The teletype served as both an input device and output printer, allowing students to type commands and receive responses from the remote mainframe computer on paper.

By having school districts share the costs of the Sperry-Rand UNIVAC mainframe hardware, software, and technical support, interactive computing was made available to schools that otherwise could not afford it.

By late 1967, TIES was serving more than 130,000 students.

Central to that classroom experience was the Model 33 ASR (Automatic Send-Receive) teletype terminal, made by Teletype Corporation, an AT&T subsidiary.

These electromechanical machines transmitted seven-bit ASCII (American Standard Code for Information Interchange) over telephone lines at 110 baud to remote mainframe computers.

To put it in perspective, sending data at 110 baud, about 10 characters per second or 0.00011 Mbps, was over 900,000 times slower than a typical 100 Mbps home internet connection today.
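
The 10-characters-per-second figure falls out of the Model 33’s framing: each seven-bit character commonly traveled with a start bit, a parity bit, and two stop bits, for 11 bit times per character. A quick Python check of the arithmetic:

```python
# How 110 baud becomes "about 10 characters per second" on a Model 33 ASR.
BAUD = 110
BITS_PER_CHAR = 1 + 7 + 1 + 2     # start + 7 data + parity + 2 stop = 11

chars_per_second = BAUD / BITS_PER_CHAR
print(chars_per_second)            # 10.0

# Comparing 110 bits per second with a 100 Mbps home connection:
slowdown = 100_000_000 / BAUD
print(f"about {slowdown:,.0f} times slower")
```

The ratio comes out near 909,000, consistent with the “over 900,000 times slower” figure above.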

While the Model 33 ASR sent and received data as electrical signals over the telephone line to and from the remote mainframe, the teletype contained no microprocessor or modern digital logic.

Instead, it used electromechanical circuitry.

Incoming serial data was received as start-stop pulses, typically at 110 baud, and passed through a distributor mechanism that synchronized the signal with the machine’s internal motor-driven timing.

Each seven-bit ASCII character, along with start and stop bits, was decoded by a series of selector magnets that actuated mechanical linkages, cams, and code bars.

These components positioned the print mechanism to strike the correct character on paper.

The same signals could also drive the paper tape punch, where hole patterns encoded each character for storage and later playback.

A Model 33 ASR terminal cost around $1,000 at the time, equivalent to about $10,750 today.

To connect to the remote mainframe, the terminal used either a built-in call-control unit, which included a rotary telephone dial, mode selector, and line controls, or an acoustic coupler with a standard telephone handset on the line.

With the built-in call-control unit, the user placed the call directly from the teletype itself. With an acoustic coupler, the user dialed on a standard telephone, waited for the answering tones, then set the handset into the teletype’s rubber cups so the modem could send and receive data over the line.

Because telephone networks carried analog signals while computers communicated digitally, a modem converted the teletype terminal’s outgoing digital data into analog audio tones for transmission over the phone line, then demodulated incoming tones from the remote mainframe back into digital data.
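
That modulate-and-demodulate round trip can be illustrated with a toy frequency-shift-keying modem. The two tone frequencies below follow the Bell 103 originate-side convention, but everything else here (sample rate, function names) is invented for the sketch; real Bell System data sets were considerably more sophisticated.

```python
import math

# Toy FSK modem (illustrative only): a 0 bit becomes one audio tone, a 1 bit
# another, and the receiver picks whichever tone correlates best per bit.

RATE = 8000                 # audio samples per second
BAUD = 300                  # bits per second
F0, F1 = 1070.0, 1270.0     # Bell 103-style originate-side tone pair
N = RATE // BAUD            # samples per bit interval

def modulate(bits):
    samples = []
    for i, bit in enumerate(bits):
        f = F1 if bit else F0
        for n in range(N):
            t = (i * N + n) / RATE
            samples.append(math.sin(2 * math.pi * f * t))
    return samples

def tone_energy(chunk, f, offset):
    # Quadrature correlation: energy of the chunk at frequency f.
    s = sum(x * math.sin(2 * math.pi * f * (offset + n) / RATE)
            for n, x in enumerate(chunk))
    c = sum(x * math.cos(2 * math.pi * f * (offset + n) / RATE)
            for n, x in enumerate(chunk))
    return s * s + c * c

def demodulate(samples):
    bits = []
    for i in range(0, len(samples), N):
        chunk = samples[i:i + N]
        bits.append(1 if tone_energy(chunk, F1, i) > tone_energy(chunk, F0, i) else 0)
    return bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
print(demodulate(modulate(msg)) == msg)   # True
```

The same idea, digital bits carried as audible tones, is why callers who picked up an extension phone heard the screeching described earlier in these columns.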

The teletype didn’t have a screen; instead, it printed both the student’s input and the computer’s output on long rolls of light-yellow canary paper.

The Model 33 ASR featured a built-in eight-hole punched-tape reader and punch, which functioned as an early form of offline data storage.

As students typed, it punched one‑inch‑wide paper tape with hole patterns encoding each character, allowing them to save and later reload their programs into the mainframe without retyping them during a dial‑up session.

The 1970-71 school year saw more than 26,000 students logging into the TIES computer network.

TIES formed one of the nation’s first school-based online communities, with students learning and sharing BASIC (Beginners’ All-purpose Symbolic Instruction Code) and FORTRAN (Formula Translation) programs, information, and messages across school districts.

The computer network let students across Minnesota exchange messages and programs, including math games, YAHTZE (a computer version of Yahtzee), and simulations such as “Hamurabi,” “Sumer,” “Lunar Lander,” and “Star Trek.”

In 1971, Minnesota student teachers Don Rawitsch, Bill Heinemann, and Paul Dillenberger created “The Oregon Trail.”

Heinemann and Dillenberger programmed it in HP (Hewlett-Packard) Time-Shared BASIC on a Minneapolis school district minicomputer, while Rawitsch handled research and design.

In 1973, the Minnesota Legislature created the Minnesota Educational Computing Consortium (MECC) to expand computer access for students across the state.

That same year, MECC spent about $7 million (roughly $50 million today) on a UNIVAC 1110 mainframe built by Sperry Rand’s Univac division in St. Paul, installing it at 1925 Sather Street in Lauderdale, Minnesota.

This powerful mainframe computer became the heart of MECC’s groundbreaking statewide time‑sharing mainframe computing network, giving hundreds of schools remote access to educational software.

To manage long-distance toll costs, MECC used AT&T Wide Area Telephone Service, or WATS, and Foreign Exchange, or FX, lines through the telephone company.

WATS provided national inbound 800-number access to the mainframes through the early toll-free service AT&T established in May 1967.

FX lines were dedicated circuits used for in-state connections, letting a school connect its teletype to the dial tone of a remote telephone exchange where the mainframe’s number was local.

That allowed the school to call the mainframe as if it were in the same exchange, avoiding regular long-distance toll charges.

Like WATS, FX service was billed at a flat monthly rate, which was lower than standard long-distance toll-call charges.

A personal note: while working at the Winsted Telephone Company, we installed and maintained many national inbound and outbound WATS lines and state FX lines for local businesses, most of them tied to the Twin Cities’ large toll-free metro calling area.

During the 1970s, Winsted only had toll-free calling to Lester Prairie.

The Blooming Prairie Times reported April 23, 1975, that math and biology students used a teletype machine connected by modem to Austin High School, which relayed data to the UNIVAC 1110.

By 1977, the UNIVAC 1110 at 1925 Sather Street was retired due to high user demand and system latency that prevented it from meeting MECC’s performance contract.

It was replaced by the Control Data Corporation Cyber 73, which supported up to 448 simultaneous connections, more than 5,000 daily sessions, and approximately 2,000 teletype terminals across the statewide network.

In 1978, schools began shifting from remotely accessed mainframes to compact stand-alone microcomputers, allowing districts to own their machines and eliminate dial-up costs and latency.

MECC sought bids for a standard classroom computer and received proposals from Radio Shack, Commodore, and Apple.

International Business Machines (IBM) did not bid, as its personal computer would not be released until 1981.

MECC selected the Apple II and purchased 500 units for $649,000 ($3.4 million today).

The TIES cooperative, which first gave Minnesota students access to a central time‑sharing mainframe computing network, dissolved in 2018.

Today, the clatter of teletypes lingers only in memory, yet the digital pathways forged by those early telephone lines and circuitry laid the foundation for Minnesota’s first school‑connected computer network.



Thursday, April 9, 2026

Apollo 8 and Artemis II: no lifeboat

@Mark Ollig


After learning that Artemis II would proceed without an attached lunar lander, I was reminded of Apollo 8.

In 1968, I wondered what would happen if something went wrong and there was no lunar module (LM) aboard as a lifeboat.

Engineering delays with the LM led Apollo 8 to abandon planned Earth-orbit testing.

NASA sent three Apollo 8 astronauts into lunar orbit in the command and service module (CSM), without an attached LM.

Apollo Spacecraft Program manager George M. Low called the mission “audacious and not without some risk.”

With time running out to meet Kennedy’s goal and the Soviet Union advancing, NASA moved ahead.

NASA chose a free-return trajectory: if the service module engine failed on the way to the moon, the spacecraft would loop around the moon and use lunar gravity to swing it safely back to Earth.

Apollo 8 launched from Pad 39A Dec. 21, 1968, at Kennedy Space Center.

Apollo 8 astronauts Frank Borman, James Lovell Jr., and William Anders became the first people to leave Earth orbit, reach the moon, and circle it, completing 10 orbits.

The spacecraft came within 77.6 miles of the lunar surface.

When an oxygen tank exploded in the Apollo 13 service module April 13, 1970, knocking out the command module’s power and oxygen supply, the astronauts moved into Aquarius, the lunar module, for life support and engine burns.

They later reentered the command module and used its remaining power to return to Earth.

Had that failure occurred on Apollo 8, there would have been no lifeboat.

From Apollo 10 through 17, the LM traveled attached to the CSM and could serve as a backup lifeboat.

After two astronauts lifted off in the ascent stage and rejoined the third astronaut in the CSM, the ascent stage was released into lunar orbit, where it later crashed onto the moon.

Apollo 10 was the exception; its ascent stage was sent into orbit around the sun.

NASA’s Artemis II mission launched April 1 from Launch Complex 39B at Kennedy Space Center, using the Space Launch System.

The SLS stands about 322 feet tall and produces more than 8.8 million pounds of thrust, compared with the Apollo-era Saturn V at 363 feet and 7.5 million pounds.

This is the first crewed Artemis flight and the first crewed mission around the moon since Apollo 17.

Commander Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen flew aboard Orion, named Integrity, without a lunar lander.

Rather than entering lunar orbit, Integrity circled the far side of the moon on a free-return path similar to Apollo 8.

The Apollo service module was almost 25 feet tall and housed fuel cells, cryogenic oxygen and hydrogen supplies, propellant tanks, and the Service Propulsion System engine.

Artemis II’s European Service Module (ESM) was built by Airbus Defense and Space under contract to the European Space Agency.

It is about 13 feet tall and houses Orion’s main engine, propellant tanks, water, oxygen, nitrogen, thermal-control hardware, and four solar array wings for onboard power.

Lockheed Martin built the Orion crew module, the capsule where astronauts live and work during the mission.

Together, the crew module and ESM made up the Orion spacecraft for most of the mission.

Apollo 8’s command module was 12 feet 10 inches wide and offered about 210 cubic feet for three astronauts.

Orion’s crew module is 16 feet 6 inches in diameter and has about 316 cubic feet of space for four.

It features a dedicated toilet and a Universal Waste Management System using airflow technology similar to that on the International Space Station, unlike Apollo crews, who used waste-collection bags and a handheld device.

Apollo 8 astronauts used the DSKY (display and keyboard) with the Apollo Guidance Computer, typing verb and noun codes on a panel with mechanical keys and green electroluminescent numeric readouts.

Verb told the computer what operation to perform; noun told it which data or system to act upon, with the active codes and results shown in three stacked rows of digits.

Orion’s cockpit centers on three large flat-panel glass displays, each reconfigurable for flight data, navigation, systems status, or procedures.

Surrounding the screens are switch panels with guarded and unguarded switches and rotary selectors, plus a cursor control device so astronauts can select on-screen items and still execute critical commands by hard switch if a display or controller fails.

Honeywell Aerospace provides two Vehicle Management Computers, each containing two flight modules, for a total of four.

These modules are connected via a triple-redundant Time-Triggered Gigabit Ethernet network that links sensors, propulsion, and life-support systems while isolating faults if a unit fails.

Honeywell’s navigation technology includes inertial measurement hardware made in Minnesota.

While Apollo’s CSM drew power from three fuel cells and carried backup batteries, Artemis’s Orion module uses lithium-ion batteries and solar array wings.

Orion is equipped with four 120-volt lithium-ion batteries recharged by its four solar array wings.

The Crew Survival System suits can keep astronauts alive for up to 144 hours in case the cabin loses pressure or becomes contaminated.

Communications advanced from Apollo’s voice transmissions and basic data over the Deep Space Network to Orion’s voice, video, navigation, and science data.

The Artemis II Orion spacecraft also carried the Orion Optical Communications System, which transmitted 4K video at up to 260 megabits per second using infrared light.

After launch, Orion entered a highly elliptical orbit around Earth that extended to about 46,000 miles above the planet.

The trans-lunar injection (TLI) command sent Integrity on a four-day trip to the moon April 2, the first TLI since Apollo 17.

Orion began close lunar observations April 6 at its nearest point to the moon during its flyby.

The spacecraft descended for landing in the Pacific Ocean using two drogue parachutes, three pilot parachutes, and three main parachutes, although it could land safely with two mains.

After splashdown, five airbags righted Orion.

Artemis III, set for 2027, will test docking operations with commercial lunar landers in low Earth orbit, similar to Apollo 9 in 1969.

Artemis IV, scheduled for 2028, will send Orion and its crew to lunar orbit, where they will dock with a lander sent ahead on an uncrewed mission.

Two astronauts will then enter the lunar lander and descend to the surface, making this the first crewed moon landing since Apollo 17 in 1972.

Unlike the Apollo moon-landing missions, the Artemis IV crew will travel 240,000 miles to the moon without a backup vehicle.

In a March 9 audit, the NASA Office of Inspector General warned that NASA lacks the means to rescue stranded astronauts in space or on the lunar surface.

NASA previously developed backup rescue plans for Skylab and the Space Shuttle, including a modified Apollo ready to save a stranded Skylab crew in 1973.

Following the Columbia disaster Feb. 1, 2003, the Columbia Accident Investigation Board concluded Shuttle Atlantis could have been launched on an emergency mission if the tile damage had been identified in time.

Since retiring the shuttle in 2011, NASA has not maintained a dedicated rapid in-space rescue capability for disabled crewed spacecraft, relying instead on each vehicle’s ability to return to Earth safely, including Orion.

Minnesota companies supply important technology for Artemis II.

Stratasys in Eden Prairie made more than 100 3D-printed parts for the Orion spacecraft using specialized thermoplastic.

Honeywell Aerospace Technologies in Plymouth supplied guidance and navigation systems, command-and-data-handling hardware, display and control units, and core flight software.

PaR Systems (Precision Automated Robotics) in Shoreview supplied friction-stir welding technology used to manufacture major Space Launch System and Orion structures for Artemis II.

The Apollo 8 command module is displayed at the Museum of Science and Industry in Chicago.



Thursday, April 2, 2026

Automation and AI maintain circuit board production

@Mark Ollig

Your vehicles, smartphones, computers, and other electronic devices depend on circuit boards filled with hundreds or even thousands of tiny electronic parts.

These parts include resistors, capacitors, diodes, transistors, integrated circuits, connectors, inductors, relays, and switches.

They also include sensors, voltage regulators, crystals, oscillators, logic gates, and memory chips.

There are more, but I think you get the idea.

Picking, placing, and monitoring those parts during production demands considerable effort that most people never see.

Every completed circuit board relies on a precise system that ensures each tiny part is placed in the exact location and verified for quality.

This week’s column centers on my son Daniel, whose production facility in Minnesota manufactures industrial electronic modules full time.

The company is family-owned, and Daniel works side by side with his son, my grandson.

Recently, they installed an SMT HW-T8-72/80F automated pick-and-place machine for prototyping and assembling printed circuit boards, or PCBs.

Much of their work happens behind the scenes, but the products they build help keep industrial equipment, vehicles, and control systems running across the country.

The SMT HW-T8-72/80F pick-and-place machine is manufactured in China by Beijing Huawei Silkroad Electronic Technology Co. Ltd., which exports its surface-mount equipment worldwide, including to customers in the United States.

The machine measures 4 feet, 6 inches by 4 feet, 8 inches by 4 feet, 7 inches and weighs about 1,100 pounds.

It operates on 220-volt alternating current and requires compressed air to power its pneumatic vacuum and motion systems.

I watched the SMT HW-T8-72/80F in operation in a manufacturer’s demonstration video.

About the size of two vending machines, this pick-and-place system features an operator screen, rows of component tape feeders, and a fast-moving placement head that works over a circuit board.

At the center of the machine is an eight-head placement system mounted on a horizontal gantry.

The circuit board moves into the work area on a conveyor.

Once inside, cameras locate reference marks on the circuit board so the machine knows its exact position.

The placement program then directs the moving heads to put each part at the correct X-Y location.

Several pickup heads work in rapid sequence as the machine gathers tiny electronic components from reels of carrier tape and places them onto the circuit board.

Feeders advance the carrier tape in small steps so each part is presented in the correct pickup position.

The machine uses vacuum nozzles to lift the parts, while its camera system checks their position and orientation before placing them on the board.

The entire process is controlled through an operator panel with a computer monitor, where jobs are loaded, feeder positions are assigned, and machine operation is monitored.

Safety panels enclose the work area while still allowing the operator to watch component placement on the circuit board through a viewing window during production.

The SMT HW-T8-72/80F can hold up to 80 feeders and is designed to handle a wide range of small electronic parts used on today’s circuit boards.

It works with standard digital files from design software, such as bills of materials and centroid files, which provide the X-Y coordinates and rotation data needed to place parts on a circuit board.

It also uses PCB layout data derived from Gerber files.

The files are named for the Gerber Scientific Instrument Co., which developed the format to describe a circuit board’s physical layout, including copper traces, connection pads, solder mask, and printed markings used in manufacturing.

These files help tell the machine where components go and how the job should be set up.
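A centroid file is, at heart, a simple table. As a rough sketch of how such a file is read (the column names, units, and sample values here are assumptions for illustration, since the exact layout varies by design software):

```python
import csv
import io

# A hypothetical centroid (pick-and-place) file. Real layouts vary by CAD
# tool; these column names and millimeter units are assumptions.
SAMPLE = """Designator,Footprint,X,Y,Rotation,Side
C1,0402,12.70,25.40,90,Top
R1,0201,10.16,25.40,0,Top
U1,SOIC-8,30.48,15.24,270,Top
"""

def load_centroid(text):
    """Return a list of placements: (designator, x_mm, y_mm, rotation_deg)."""
    placements = []
    for row in csv.DictReader(io.StringIO(text)):
        placements.append((row["Designator"], float(row["X"]),
                           float(row["Y"]), float(row["Rotation"])))
    return placements

for part, x, y, rot in load_centroid(SAMPLE):
    print(f"{part}: place at ({x:.2f}, {y:.2f}) mm, rotated {rot:.0f} degrees")
```

Each row gives the machine one placement: which part, where on the board, and how it should be turned.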

The process begins after the solder paste has been applied to the circuit board.

The machine places the components onto the pasted surface before the assembly moves to reflow, where heat melts the solder and forms permanent electrical connections.

After reflow, the board is inspected, often by automated optical inspection, or AOI, and any needed touch-ups or rework are completed before final testing.

The main advantage of the SMT HW-T8-72/80F pick-and-place machine is its ability to place parts with consistent speed, precision, and reliability.

In practice, success relies not only on the machine but also on careful setup.

Feeder positions must be planned, reels kept readily accessible, nozzle types matched to the corresponding parts, and circuit board alignment verified before full production begins.

These steps help prevent defects such as tombstoning, where one end of a component lifts off the board; skew, where a part is crooked; and solder bridging, where solder connects two points that should remain separate.

A slow first-article run ensures proper part orientation and placement before full production, after which production speed increases.

The SMT HW-T8-72/80F can place nearly 11,000 components per hour under ideal conditions, but actual speeds are usually lower because of part size and board complexity.

Its positioning accuracy is rated at plus or minus 0.0004 inch, and it can handle parts as small as metric-0201 components, roughly 0.008 by 0.004 inch, as well as larger parts used in industrial electronics.
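To put those numbers in perspective, here is a back-of-the-envelope estimate of how long one board might take. The rated speed comes from the paragraph above; the derating factor and board size are assumptions, not figures from the machine's spec sheet:

```python
# Back-of-the-envelope throughput estimate. Only RATED_CPH comes from the
# machine's rating; the derating factor and board size are assumptions.
RATED_CPH = 11_000        # rated placements per hour under ideal conditions
EFFICIENCY = 0.6          # assumed real-world derating for complex boards
PARTS_PER_BOARD = 250     # hypothetical board

effective_cph = RATED_CPH * EFFICIENCY
seconds_per_board = PARTS_PER_BOARD / effective_cph * 3600
print(f"About {seconds_per_board:.0f} seconds per board "
      f"at {effective_cph:.0f} placements per hour")
```

Even with a generous slowdown for small parts and camera checks, a fully populated board takes only a couple of minutes.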

To guide component placement, the machine uses an eight-camera vision system to locate the board’s reference points, while vacuum nozzles pick up and place each component.

With the rise of automated board assembly, traceability became crucial: speed is only valuable if the system can track which parts were used and where each one was placed.

Daniel developed and implemented two in-house artificial intelligence software systems that run entirely on local hardware, with no cloud connectivity.

The first is an automated optical inspection system, or AOI, that uses a high-resolution camera to capture images of assembled circuit boards and compare them with known-good references.

Its software then uses a neural network and other AI tools to identify missing parts, misaligned components, and solder defects.

Circuit boards flagged by the AI system are sent to a human operator for review and confirmation.
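Stripped down to its core, reference-compare inspection means measuring how far each region of a captured board image drifts from a known-good one. This miniature sketch shows only that basic idea, with made-up brightness values; a real AOI system like Daniel's uses trained neural networks on high-resolution images:

```python
# Conceptual sketch of reference-compare inspection: flag board regions
# whose brightness differs from a known-good ("golden") image beyond a
# threshold. The 3x3 grids and tolerance below are invented for illustration.
GOLDEN   = [[200, 200,  50], [200,  50,  50], [50, 50, 200]]
CAPTURED = [[200, 120,  50], [200,  50,  50], [50, 50,  30]]
THRESHOLD = 60  # assumed per-region brightness tolerance

def flag_defects(golden, captured, threshold):
    flags = []
    for r, (grow, crow) in enumerate(zip(golden, captured)):
        for c, (g, v) in enumerate(zip(grow, crow)):
            if abs(g - v) > threshold:
                flags.append((r, c))  # region needs a human look
    return flags

print(flag_defects(GOLDEN, CAPTURED, THRESHOLD))
```

The flagged coordinates are exactly what gets handed to the human operator for review.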

The second AI system manages inventory and production.

It scans components as they arrive, capturing part numbers, quantities, lot codes, date codes, and other details from standardized 2D barcodes to reduce manual-entry errors and improve traceability.

Each reel is tracked individually, including partial reels, so the system always knows exactly what is available for production.

When a component barcode is damaged, AI-assisted fallback tools can restore the missing data, maintaining full traceability of each component from receipt through storage to production.
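Component labels of this kind commonly follow the ANSI MH10.8.2 data-identifier convention, where each field starts with a short code such as "1P" for part number or "Q" for quantity. As a rough sketch (the identifiers, separator, and sample payload are assumptions, since label formats vary by supplier):

```python
GS = "\x1d"  # ASCII group separator common in 2D barcode payloads

# A few ANSI MH10.8.2 data identifiers. Real labels vary by supplier,
# so treat this table and the sample payload as assumptions.
IDENTIFIERS = {"1P": "part_number", "1T": "lot_code",
               "9D": "date_code", "Q": "quantity"}

def parse_label(payload):
    record = {}
    for field in payload.split(GS):
        # Try longer identifiers first so a two-character code
        # is matched before a one-character one.
        for di in sorted(IDENTIFIERS, key=len, reverse=True):
            if field.startswith(di):
                record[IDENTIFIERS[di]] = field[len(di):]
                break
    return record

label = GS.join(["1PEXAMPLE-PART-123", "Q5000", "1TLOT4821", "9D2614"])
print(parse_label(label))
```

One scan of such a label yields the part number, quantity, lot code, and date code in a single step, with no keyboard entry.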

Additionally, these AI tools verify the availability of required parts before a job begins and identify potential shortages to prevent production delays.
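The shortage check itself reduces to simple bookkeeping: sum what is on hand across every reel, including partials, and compare it with what the job needs. A minimal sketch with invented part numbers and quantities:

```python
# Hypothetical pre-job shortage check; all part numbers and quantities
# below are invented for illustration.
inventory = {  # part number -> quantity on each reel, partials included
    "CAP-0402-100N": [4000, 1200],
    "RES-0201-10K":  [900],
}
job_bom = {  # part number -> quantity this job requires
    "CAP-0402-100N": 5000,
    "RES-0201-10K":  1500,
}

def find_shortages(bom, stock):
    shortages = {}
    for part, needed in bom.items():
        on_hand = sum(stock.get(part, []))
        if on_hand < needed:
            shortages[part] = needed - on_hand
    return shortages

print(find_shortages(job_bom, inventory))
```

Running the check before setup means a missing reel is discovered at the desk, not halfway through a production run.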

Both AI systems operate fully on-site, so there are no cloud data transfers or subscription fees.

These systems ensure that every completed circuit board and module meets stringent quality and reliability standards.

The pick-and-place machine assembles the boards, while Daniel’s production team uses in-house AI and separate computing systems to monitor inventory, maintain detailed records, and oversee production operations.

Behind much of today’s advanced technology are skilled people like my son and grandson, who help produce the customized circuit boards, electronic control modules, and other electronic assemblies industry relies on every day.