
Thursday, June 19, 2025

The Leich Dial System kept the town talking: part two

@Mark Ollig


From 1960 to 1986, the Leich (pronounced “like”) electromechanical all-relay dial system processed phone calls for the subscribers of the Winsted Telephone Co.

The main distribution frame (MDF), a two-sided steel frame approximately 15 feet long and 10 feet high, served as the physical connection point between outside telephone line cable pairs (external network) and the Leich call processing switch.

The back side (or vertical side) of the MDF, where the outside cable pairs terminated, contained rows of terminal blocks attached vertically along the full height of the frame.

A terminal block was a rectangular block of insulating Bakelite (the first synthetic plastic), approximately 10 inches wide and seven inches deep, fitted with rows of metal lugs that provided dedicated, fixed points for soldering telephone jumper wires.

The cable pairs were soldered to the left-hand metal terminal posts (lugs) of a vertical terminal block.

The corresponding right-hand terminal posts were left open, ready to be cross-connected with a jumper wire to the Leich switch when a specific cable pair was assigned to a new subscriber.

All cable pairs were wired through a two-stage protection system to handle two distinct electrical threats.

The primary defense against high-voltage surges was a carbon block protector, also known as a lightning arrester.

It featured a small air gap that would arc during a surge, creating a path to safely divert the dangerous voltage to the office grounding system and protect the Leich switch.

The second stage used a heat coil to guard against “sneak currents,” which are sustained, low-voltage overcurrents too weak to cross the air gap.

Unable to arc across the carbon protector, the current flowed through the coil, melting a solder pellet to release a spring-loaded mechanism that grounded the line and protected the Leich switch.

On the frame’s front (or horizontal) side, wiring from the various Leich switch circuits was permanently soldered underneath ten horizontal rows of individual terminal blocks spanning the length of the MDF.

Dedicated terminal blocks included line finders, hundreds groups, party-line assignments, trunking, and miscellaneous blocks.

We soldered the MDF cross-connections using “jumper wires,” typically 22 AWG or 24 AWG solid copper.

Jumper wires for subscriber lines on the vertical side of the MDF were wired to their assigned terminal block circuits on the horizontal side.

To provision a line, we first used rosin-core solder to connect one end of a jumper wire to the subscriber’s assigned cable pair on the vertical side of the MDF.

This first jumper was run to the horizontal side and connected to the subscriber’s assigned line finder circuit terminal block.

From that same terminal, a second jumper was then run to the connector block associated with the last four digits of the telephone number.

For an assigned number like 485-4111, this second jumper was terminated on the ‘41’ hundreds-group block and soldered to the specific terminal lugs representing ‘11’, completing the physical path to the correct Leich connector circuits.
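Purely as an illustration (the function and key names below are my own, not anything from Leich documentation), the digit-to-block mapping described above can be sketched in a few lines of Python:

```python
# Illustrative sketch: map the last four digits of a telephone number to
# its hundreds-group block and the terminal lugs within that block,
# following the 485-4111 example above.
def mdf_assignment(number: str) -> dict:
    last_four = number.replace("-", "")[-4:]    # e.g. "4111"
    return {
        "hundreds_group_block": last_four[:2],  # the "41" block
        "terminal_lugs": last_four[2:],         # lugs "11"
    }

print(mdf_assignment("485-4111"))
# {'hundreds_group_block': '41', 'terminal_lugs': '11'}
```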

For multiparty lines, an additional, separate sleeve wire was run from the associated hundreds group to a dedicated terminal block for party-line assignments.

To minimize the risk of short circuits from splattering solder, we placed a heavy canvas apron over the lower terminal blocks.

Today, MDFs use solderless wire wrap terminations.

When a subscriber lifted the handset, it completed a circuit that signaled the Leich switch.

A line finder would then connect the subscriber’s line to the first available link relay, which provided a dial tone.

After the last digit was dialed, the call was routed through the Leich switch’s selector stages to the called line.

The voice call path was established by the movement of thin, gold-dipped metal crosspoint contacts (yes, real gold), which ensured voice-quality consistency.

Leich documentation described them as “bar-type twin contacts of precious metal.”

I recall the sound of the distinctive “buzz” dial tone generated by a vibrating reed interrupter inside a sealed metal container powered by 48 VDC.

We described the dial tone as sounding like “a bee in a can.”

Many of the relay bars featured Sylvania lamps to indicate the status of active switching selector links, which also aided when troubleshooting.

I recall replacing many of those lamps when one burned out.

In 1981, the Winsted Telephone Co. installed dual-tone multifrequency (DTMF) equipment, allowing the Leich switch to decode tones from subscriber touch-tone keypads.

As a separate but related project that same year, the Leich switch’s “bee in a can” dial tone was replaced with a precise, dual-frequency tone (350 Hz and 440 Hz), as specified in the Bell System’s precise tone plan.
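That precise dial tone is simply two sine waves added together. A small Python sketch generates one second of it (the 8,000-sample rate is an arbitrary choice for illustration):

```python
import math

SAMPLE_RATE = 8000  # samples per second; arbitrary, for illustration only

def dial_tone(t: float) -> float:
    """Precise dial tone: the sum of 350 Hz and 440 Hz sine waves."""
    return (0.5 * math.sin(2 * math.pi * 350 * t)
            + 0.5 * math.sin(2 * math.pi * 440 * t))

# One second of the tone as a list of amplitude samples.
samples = [dial_tone(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
```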

In 1986, Winsted Telephone Co. replaced its 26-year-old Leich platform with a Northern Telecom DMS-10 digital call-processing switch.

The DMS-10 used pulse code modulation (PCM) and time division multiplexing (TDM), with programming done through a VT-100 terminal.

That same year, we installed a new MDF using 88 Series Terminal Blocks with wire wrap termination – using, you guessed it, a wire wrap tool to securely wrap jumper wires around the metal terminal posts.

I was certified on the DMS-10 and maintained it for many years; I found it to be an excellent voice-switching platform.

Today, Winsted TDS Telecom telephone subscribers use the Metaswitch platform.

The Metaswitch is a “softswitch” that employs Voice over Internet Protocol (VoIP) technology.

The voice packets are transmitted with transport layer security (TLS) for signaling and secure real-time transport protocol (SRTP) for voice encryption.

I was certified on the Metaswitch platform and spent eight years working with it prior to my retirement.

Reflecting on my 13 years with the Leich switch, one memory that stands out is giving tours to students from the local Winsted schools.

They were fascinated by the telecommunications equipment, attentively watching and listening to the rhythmic clicking of the relays as they observed the blinking lights of the selector links.

The students paid close attention as we demonstrated how telephone calls were processed, and they especially liked seeing where their phone’s dial tone originated.

Yes, the old Leich Dial System kept the town talking.



































Friday, June 13, 2025

The Leich Dial System kept the town talking: part one

@Mark Ollig


The Leich (pronounced “like”) Dial System was a telephone call switching platform manufactured by Leich Electric Co. during the 1950s and 1960s.

At the core of the Leich Dial System were its relay bars, which its 1959 catalog advertised as a “jacked-in design permitting easy installation, reconfiguration, and maintenance by simply removing or inserting the entire bar into a shelf slot.”

Averaging 22.5 inches in height and 2.5 inches in width, each relay bar weighed between 10 and 15 pounds and contained numerous relays and electrical components on its front side.

The backside contained a central terminal connector with two parallel rows of copper contacts.

This design enabled the relay bar to be inserted directly into, or removed from, a series of corresponding jacks on the shelving bay’s backplane, providing a secure, solderless connection.

Relay bars were configured as hundred-group line finders and connectors, relay links, first, second, and fifth selector switches, fire bars, interoffice trunking circuits, and more.

In 1960, the Winsted Telephone Company installed the Leich TPS (Terminal Per Station) electromechanical voice switch.

My father, John Ollig, his brother Jim, and Kenny Norman from the telephone company, along with Bud Miller from General Telephone and Electronics and three technicians from Automatic Electric, completed the installation.

The Leich switch replaced the Winsted Telephone Company’s 1940s-era Wilcox Electric step-by-step system, which used rotary selectors.

The Leich equipment bays, measuring three feet wide and seven feet, five inches high, were located in the secure dial room of the telephone office, which featured brick walls and a 10-foot ceiling.

Arranged in three 25-foot aisles, the bays had their overhead interconnecting shelving cables neatly bundled, often sewn together with twine, and supported in racks about six inches above the bay cabinets.

These cables were the physical medium that established a coordinated voice-switching platform.

Each bay shelf had transparent Plexiglas covers in aluminum frames to protect the relay bars while keeping them visible.

The Leich switch housed hundreds of relay bars, with more added over the years as new bays and shelves were installed to support the growing number of telephone subscribers.

The switch processed the dialed digits from rotary telephones at a speed of eight to 12 pulses per second and supported telephone line loop resistances up to 1,200 ohms.

In 1960, the Leich switch used a ringing generator supplying 70 to 106 volts of alternating current (AC) at a frequency of 20 cycles per second, now referred to as hertz (Hz).

During ringing, this AC voltage was superimposed through a cut-through relay to the telephone line and into the subscriber’s telephone.

Winsted Telephone Company equipped all its telephones with single-frequency 20 Hz ringers, and subscribers leased their phones directly from the company.

On a private line, the Leich switch sent the 20 Hz AC ringing voltage across the telephone line’s tip and ring wires to activate the telephone’s ringer.

For a two-party line, however, the switch achieved selective ringing by sending this same 20 Hz AC ringing voltage down either the “tip” or “ring” wire side, activating only the telephone specifically wired to respond to that side.

For four-party lines, the Leich system, combined with specific in-telephone wiring and central office configuration, enabled selective ringing.

This ringing configuration was achieved by creating a “polarized” ringer in each telephone, which involved wiring each phone with an inductor to respond to the 20 Hz ringing frequency and a cold-cathode tube to verify the ringing voltage’s positive or negative polarity.

The Leich switch leveraged the sleeve lead (single wire) party positions at the main distribution frame to send “divided ringing” (selecting either the tip or ring wire) and apply a specific DC polarity.

This combination generated four unique ringing configurations, ensuring only the intended telephone on the shared line would ring.
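The four combinations follow directly from two binary choices, which a quick Python illustration makes plain:

```python
from itertools import product

# Divided ringing (tip or ring side) crossed with DC polarity (positive
# or negative) yields the four unique ringing configurations described
# above, one per party on the four-party line.
ringing_configs = list(product(("tip", "ring"), ("positive", "negative")))

for side, polarity in ringing_configs:
    print(f"{side} side, {polarity} polarity")
print(len(ringing_configs))  # 4
```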

On eight- or 10-party lines, the Leich system used coded ringing, which involved applying the ringing voltage in a unique pattern of short and long bursts assigned to each subscriber.

The Winsted Telephone Company’s central office battery room contained 24 large lead-acid cells housed in thick, rectangular glass containers known in the industry as battery jars.

Each 150-pound cell contained heavy lead plates immersed in sulfuric acid.

The glass jar provided a stable, non-reactive barrier to safely contain the corrosive acid while also allowing for visual inspection of electrolyte levels.

Wired in series, these 24 cells formed a single battery, with each cell contributing 2.1 to 2.2 volts DC to provide a total voltage of 50.4 to 52.8 volts DC.

The central office power plant used a rectifier to convert commercial AC to DC for powering the Leich switch’s call-processing systems and ancillary devices.

It maintained a 50 to 54 volt DC float charge on the 24-cell battery, ensuring the Leich switch operated within its required 44 to 54 volt DC range.

This float charge ensured that the battery remained fully charged, allowing the Leich switch and connected subscriber telephones to continue operating even during a commercial power outage.
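The battery arithmetic above is easy to verify; a few lines of Python confirm the series totals land inside the Leich switch’s 44 to 54 volt DC operating range:

```python
CELLS = 24                           # lead-acid cells wired in series
V_CELL_MIN, V_CELL_MAX = 2.1, 2.2    # volts DC contributed per cell

total_min = CELLS * V_CELL_MIN       # 50.4 VDC
total_max = CELLS * V_CELL_MAX       # 52.8 VDC
print(round(total_min, 1), round(total_max, 1))

# Both totals sit within the switch's required 44-54 VDC range.
assert 44 <= total_min <= 54 and 44 <= total_max <= 54
```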

Anyone with a Winsted party line might recall this announcement: “You have dialed a subscriber on your own line; please hang up to allow their telephone to ring. Thank you.”

Leich kept the whole town of Winsted talking.

Next week: part two.





Friday, June 6, 2025

The journey to Direct Distance Dialing

@Mark Ollig


The Bell Telephone Company was founded July 9, 1877, in Boston, MA, by Alexander Graham Bell, his father-in-law Gardiner Greene Hubbard, and financial backer Thomas Sanders.

It merged with the New England Telephone and Telegraph Company Feb. 17, 1879, to become the National Bell Telephone Company.

The company would manage the production, leasing, and installation of telephones and exchanges through the use of Bell’s patents.

The National Bell Telephone Company merged with the American Speaking Telephone Company to form the American Bell Telephone Company March 20, 1880, which became the central corporate entity for Bell interests in the US.

AT&T (American Telephone and Telegraph) was incorporated March 3, 1885, as a subsidiary of the American Bell Telephone Company.

In 1899, AT&T acquired the assets of the American Bell Telephone Company, becoming the parent company of the Bell System.

AT&T laid the foundation for a nationwide telephone network that began as an infrastructure of local telephone exchanges connected by open galvanized iron wires, similar to telegraph lines, strung on glass insulators attached to the wooden crossarms of telephone poles.

This early infrastructure formed a long-distance backbone between towns and cities, including New York City and Boston in 1885 and Chicago in 1892.

By 1920, the US network had more than 13 million telephones and approximately 32 million miles of telephone wire in use, forming a coast-to-coast network for long-distance calls.

Before 1951, making long-distance calls required telephone operators in different cities along the telephone network to manually patch connections through multiple switchboards, a process that could be time-consuming.

During the 1940s, efforts were already underway to reduce the time it took to process a call through the use of automated telephone switching equipment.

My mother and grandmother were switchboard operators in the 1930s and 1940s; my mother worked in Silver Lake, and my father’s mother worked in Winsted.

They told me stories of people calling the switchboard to ask who had died, where the fire was, why the church bells were ringing, and of using paper index cards to log patched calls for billing.

During the 1940s, when my mother operated the switchboard in Silver Lake, my father occasionally operated the Winsted switchboard.

They would talk with each other while patching calls between the two towns; they were married April 14, 1951.

In 1947, AT&T, along with the Bell System and independent telephone companies, developed the North American Numbering Plan (NANP).

This plan standardized telephone numbering, eventually leading to the development of direct distance dialing (DDD).

New Jersey was assigned area code 201, and Winsted, 612.

Although the NANP would enable DDD, individual customer dialing was not yet available.

Significant upgrades to the telephone network were necessary to achieve DDD, including the implementation of Multifrequency (MF) signaling and automated toll switching equipment.

AT&T’s Long Lines division began installing relay-logic toll-switching equipment to manage calls that included area code prefixes, which enabled the expansion of DDD.

Operators switching from using rotary dials to MF pushbutton keypads significantly reduced the time it took to process long-distance calls.

After receiving the number to be called, an operator keyed in the digits on the MF keypad, which transmitted them to the toll office’s “sender equipment” for processing and automatic routing to their destination.

Note: MF signaling is different from what is used with your pushbutton touchtone phone, which sends DTMF (dual-tone multi-frequency) digit signaling.

MF was not used on the subscriber’s telephone; around 1962, the Western Electric No. 5 Crossbar underwent modifications to process DTMF digits.

The No. 5 Crossbar system used MF signaling for trunk-to-trunk calls between telephone exchanges.

Up until Nov. 9, 1951, individual telephone subscribers were still unable to dial long-distance calls directly from their phones, but that was about to change.

A historic first in telecommunications occurred at 1 p.m. Saturday, Nov. 10, 1951, when Mayor Melvin Leslie Denning of Englewood, NJ, completed the first coast-to-coast, direct-distance-dialed telephone call without operator assistance.

The call was made to Mayor Frank P. Osborn in Alameda, CA, from a rotary dial telephone on a desk in the central telephone switching room at the New Jersey Bell offices in Englewood.

Denning dialed Mayor Osborn’s 10-digit phone number, starting with area code 415 (Oakland/Alameda).

Note: In the early to mid-1960s, the “1” prefix became necessary for long-distance dialing to resolve call routing conflicts because the first three digits dialed could be interpreted either as an area code (NPA) or a local telephone exchange office code (NXX).

For instance, if a customer in the 612 area code dialed 218-xxxx, the telephone network would not be able to distinguish whether it was a local call within the 612 area code or a long-distance call to the 218 area code.

By the way, 612-218 is a real NPA/NXX for the Twin Cities.

The “1” prefix allowed telephone switching equipment to differentiate between 10-digit long-distance calls and 7-digit local calls within a central office exchange.
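That digit-analysis rule can be sketched as a small Python function. The logic here is a simplified illustration of the idea, not actual switching-office code:

```python
def classify_dialed_digits(dialed: str) -> str:
    """Simplified digit analysis: a leading '1' marks the next three
    digits as an area code (NPA); a bare seven digits is a local call
    whose first three digits are the exchange (NXX) code."""
    if dialed.startswith("1") and len(dialed) == 11:
        return f"long-distance to area code {dialed[1:4]}"
    if len(dialed) == 7:
        return f"local call to exchange {dialed[:3]}"
    return "ambiguous"

print(classify_dialed_digits("12185551234"))  # long-distance to area code 218
print(classify_dialed_digits("2185551"))      # local call to exchange 218
```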

But I digress.

Denning’s call was processed through a modified Western Electric No. 5 Crossbar switching system equipped for automatic digit analysis and routing, along with automatic message accounting (AMA), a paper-tape billing system used to track the call’s details.

About 18 seconds after Mayor Denning placed his call to Alameda, Mayor Osborn’s phone rang.

Upon answering, Osborn heard Denning ask, “Hello. How’s the weather out there?”

“Fine,” Osborn replied, jokingly asking whether it is true that “people in New Jersey ride mosquitoes the same as we ride horses out here?” to which Denning chuckled – direct distance dialing’s journey had truly begun.

“The Nation at Your Fingertips” is a 1951 Library of Congress video (https://archive.org/details/the-nation-at-your-fingertips-1951) of an AT&T promotional film on telecommunications in Englewood, NJ.



Friday, May 30, 2025

The journey from ALOHAnet to Ethernet: A LAN is born

@Mark Ollig


In the late 1960s, Professors Norman Abramson and Franklin Kuo at the College of Engineering at the University of Hawaii created ALOHAnet, an early wireless data network using radio frequencies.

By June 1971, ALOHAnet had become operational, providing inter-island wireless access to the University of Hawaii’s central mainframe computer.

ALOHAnet’s use of randomized access allowed multiple users to share the same radio channel efficiently.

It also enhanced wireless data traffic management by enabling devices to transmit data immediately and resolving signal collisions through randomized retransmissions.

In 1972, Xerox Palo Alto Research Center (PARC) in California began developing the Alto computer, explicitly as part of their vision for “the office of the future,” which included wired networked personal computers and shared resources like printers.

That same year, Robert Metcalfe joined Xerox PARC to develop a wired network that linked Alto computers and shared devices such as printers.

Also in 1972, Metcalfe proposed his doctoral thesis at Harvard, focusing on connecting the Massachusetts Institute of Technology’s (MIT) mainframe computer to the Advanced Research Projects Agency Network (ARPANET) and analyzing its performance.

His initial thesis was rejected by his Harvard dissertation committee, which stated that “it wasn’t theoretical enough.”

To address this, Metcalfe studied Norman Abramson’s 1970 paper on the ALOHAnet system and incorporated its mathematical analysis of random access protocols into his revised thesis.

After visiting Hawaii to learn firsthand about ALOHAnet’s random access protocols, he constructed mathematical models to improve the academic accuracy of his work.

Metcalfe then revised his thesis, “Packet Communication,” which was accepted, and he earned his PhD in 1973.

While at Xerox PARC May 22, 1973, he wrote an internal memo officially titled “Alto Ethernet,” sometimes informally referred to as “Ether Acquisition” in later sources. In it, he proposed a shared 50-ohm coaxial cable to connect devices like the Alto and PDP-11 in a tree-structure topology.

The PARC internal memo begins: “The ether network. We plan to build a so-called broadcast computer communication network, not unlike the ALOHA system’s radio network, but specifically for in-building minicomputer communication.”

In November 1973, Xerox PARC created the first Ethernet prototype using a 50-ohm coaxial cable.

This prototype local area network (LAN) achieved a data transmission speed of 2.94 Mbps.

In 1973, Robert Metcalfe coined the term “Ethernet,” inspired by the “luminiferous ether,” which was believed to carry light waves.

He used it to describe the shared coaxial cable that transmits data between computers, likening it to how the ether carried light to all.

CSMA/CD (Carrier Sense Multiple Access with Collision Detection) is the protocol Metcalfe used in early Ethernet networks to manage access to a shared medium and detect data collisions.

Devices listen for traffic before transmitting, and if a collision occurs, the protocol uses a randomized backoff algorithm to retry transmission after a delay.
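The randomized backoff survives in Ethernet as truncated binary exponential backoff. Here is a brief Python sketch of that rule; the 51.2-microsecond slot time is the classic 10 Mbps Ethernet figure, used here only for illustration:

```python
import random

def backoff_delay(collisions: int, slot_time: float = 51.2e-6) -> float:
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slot times drawn from 0 .. 2**min(n, 10) - 1."""
    k = min(collisions, 10)
    slots = random.randrange(2 ** k)
    return slots * slot_time
```

Each successive collision doubles the range of possible delays, which quickly spreads contending stations apart in time.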

The transition away from coaxial cable began in the late 1980s, culminating in the 10BASE-T standard of 1990.

This standard utilized Category 3 unshielded twisted-pair (UTP) cabling in a star topology, allowing each device to connect to a central hub or switch, which provided more flexibility and cost-effectiveness.

Vampire taps are physical connectors that attach computers and printers to Ethernet cables without interrupting the network.

They work by piercing (biting) into the coaxial cable’s insulation to connect directly to the copper conductor without cutting the main cable.

Vampire taps remind me of my days splicing telephone wires with 3M™ Scotchlok™ connectors at the local telephone company.

These connectors allowed telephone wires to be inserted with their insulation intact, speeding up the splicing process.

I mostly used the UR (red) connector, which has three ports and is used for splices joining two or three cut solid copper wires ranging from 19 to 26 AWG. It is a gel-filled connector designed to be durable and moisture-resistant for long-term reliability.

The UG (green) connector is specifically designed as a tap splice; it allows a new telephone wire to be connected to a continuous, uncut line, making it ideal for tapping into existing circuits.

For thicker wires, the UO (orange) connector, model U1O, is a gel-filled, moisture-resistant butt splice for two wires ranging from 18 to 14 AWG.

It’s been more than 30 years since I last spliced telephone wires using 3M Scotchlok UR connectors, and many of those splices are still in service.

First, I’d prepare the “joint” (the specific point where the wire ends meet to be connected) by twisting the wires together one full turn.

Then, I would cut the wire ends evenly to about one inch and not strip the insulation, as the connector is designed for insulated wires.

Holding the UR connector with its red button facing down, I would insert the unstripped wires all the way into the individual ports: two ports for two wires, and three ports for three wires.

To complete the splice, I’d firmly crimp the red button using a Scotchlok E-9 series tool – I often called it the “Scotchlocker.” This action caused the sharp metal plate inside the UR to “bite” through the insulation and into the copper wires, creating a secure electrical connection.

The splicing procedure would be much the same for the UG, UY, and UO connectors.

In 1975, Xerox filed a patent application titled “Multipoint data communication system with collision detection” (US Patent 4,063,220, granted in 1977).

In a 2019 United States Patent and Trademark Office (USPTO) “Journeys of Innovation” interview, Robert Metcalfe credited ALOHAnet by stating, “And the key idea was to use randomized retransmissions.”

And so, ALOHAnet assisted in the birth of Ethernet’s LAN.




Friday, May 23, 2025

ALOHAnet: the dawn of the wireless computing age

@Mark Ollig


Developed at the University of Hawaii nearly 57 years ago, ALOHAnet pioneered the random access wireless protocols that enable your smart device’s Wi-Fi connection.

Cellular and satellite communications also owe a debt of gratitude to ALOHAnet.

The ALOHAnet project began in September 1968 at the University of Hawaii on the island of Oahu.

The university’s remote campuses on Maui, Hawaii Island, and Kauai faced the challenge of providing access to its central mainframe computer (IBM System/360 Model 65) located in Mānoa Valley, on Oahu.

In the late 1960s, students on these islands accessed the main campus computer through remote terminals linked by copper telephone lines.

They used devices like teletypes (TTYs), including the Model 33 and Model 35, which had keyboards and printed paper output, as well as early video display terminals such as the IBM 2260 and DEC VT05 to connect to the university’s central computer for processing.

The local telephone network, designed for analog voice communication, sometimes struggled with data transmissions from the terminal modems, which converted digital signals into analog for transmission.

The limitations of the telephone network drove the development of a radio-based system, ALOHAnet, as University of Hawaii professor Norman Abramson explained in the Feb. 1, 1972, Honolulu Advertiser.

Another consideration was the expense of inter-island telephone calls, as terminal users sometimes needed seconds of computer time but were billed for three minutes.

Professors Norman Abramson and Franklin Kuo from the University of Hawaii developed ALOHAnet, a wireless data system using radio frequencies.

They introduced random access protocols that enabled multiple devices to share a single radio channel, laying the foundation for modern technologies.

ALOHAnet was developed as a proof-of-concept network to connect Oahu with other campuses in the Hawaiian Islands via wireless, radio frequency channels.

Both held doctorate degrees.

Abramson earned a bachelor’s and master’s in physics before obtaining his doctorate in electrical engineering from Stanford University.

Kuo received his BS, MS, and PhD degrees in electrical engineering from the University of Illinois Urbana-Champaign.

ALOHAnet transmitted data to Hawaiian schools through a radio channel linked to an IBM System/360 Model 65 mainframe.

An HP 2115A minicomputer, called the “Menehune,” acted as the central communication processor and network gateway.

According to Franklin Kuo’s 1981 system diagram, the Menehune managed data traffic across two 100 kHz (kilohertz) UHF (ultra-high-frequency) radio channels, 407.350 MHz (megahertz) and 413.475 MHz.

ALOHAnet’s design prioritized simplicity with direct packet bursts on fixed radio frequencies (channels).

In 1969, the team chose a cost-effective fixed-frequency approach over spread spectrum technology, which required complex and expensive hardware.

While spread spectrum would later become vital for military and commercial wireless, ALOHAnet’s fixed-frequency method made packet radio practical for academic use at the time.

The Menehune also functioned as a multiplexor/concentrator, similar to the interface message processors (IMPs) of ARPANET (Advanced Research Projects Agency Network), a pioneering packet-switching network and internet predecessor.

University of Hawaii engineering students gained hands-on experience by developing components for ALOHAnet, including its “communications modules,” or terminal control units (TCUs).

They connected their teletypes and video display terminals to these TCU modules.

These TCUs managed the wireless transmission of terminal data by applying ALOHAnet’s radio access protocols.

Dr. Abramson’s 1970 paper, “The ALOHA System,” described ALOHAnet as the first packet-radio network.

It detailed a 24,000 baud channel with 704-bit packets lasting 29 milliseconds and explained how terminal control units (TCUs) formatted these packets and managed re-transmissions.
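Those figures are consistent with one another, as a one-line calculation shows:

```python
BITS_PER_PACKET = 704
CHANNEL_RATE_BPS = 24_000  # the 24,000 baud channel from Abramson's paper

# Time on the air per packet, in milliseconds.
packet_time_ms = BITS_PER_PACKET / CHANNEL_RATE_BPS * 1000
print(round(packet_time_ms, 1))  # 29.3, matching the paper's ~29 millisecond figure
```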

ALOHAnet originally operated at 24,000 bits per second (bps), but speeds often dropped to 9,600 bps.

By June 1971, ALOHAnet was up and running with its random access protocol. Students on the islands were sending data packets via radio from their teletypes and CRT terminals.

If a data packet collided, the terminal would wait a random amount of time before re-transmitting until the data was successfully sent.
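That rule (transmit right away, and on a collision wait a random delay before trying again) is the heart of the ALOHA protocol. A toy Python model follows; the 30% collision odds and the delay bound are invented purely for illustration:

```python
import random

def send_with_aloha(collision_prob: float = 0.3, max_attempts: int = 20) -> int:
    """Toy ALOHA sender: transmit immediately; if the packet collides,
    pick a random wait and retransmit. Returns the attempt number on
    which the packet finally got through."""
    for attempt in range(1, max_attempts + 1):
        if random.random() >= collision_prob:  # no collision: success
            return attempt
        delay = random.uniform(0.0, 0.2)       # randomized wait (modeled, not slept)
    return max_attempts
```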

The Feb. 1, 1972, Honolulu Advertiser article explained that ALOHAnet could support more than 500 users on a single radio channel to the central computer.

ALOHAnet thus confirmed that multiple users could efficiently share a radio channel via simple, decentralized rules, eliminating the need for complex centralized management.

In December 1972, ALOHAnet became the first satellite node of ARPANET, linking the University of Hawaii to NASA’s Ames Research Center in California via a 50 kbps channel through the INTELSAT IV F-4 satellite.

This satellite connection, funded by ARPA, enabled ALOHAnet users to access ARPANET resources.

Today, the INTELSAT IV F-4 satellite is no longer operational, but it remains in orbit around Earth.

In the fall of 1976, ALOHAnet ceased operations when funding from the US government sponsors, primarily ARPA and NASA, ended.

The Institute of Electrical and Electronics Engineers (IEEE) hosted a virtually attended (due to COVID-19) event Oct. 13, 2020, from the University of Hawaii at Mānoa to commemorate ALOHAnet’s contributions to wireless communication.

Key speakers included IEEE leaders, Dr. Norman Abramson and Dr. Franklin Kuo.

Vint Cerf, acknowledged as “the father of the internet,” also spoke, recognizing ALOHAnet’s pioneering advancements.

Dr. Norman Manuel Abramson died Dec. 1, 2020, in San Francisco at the age of 88.

Dr. Franklin Kuo became a professor emeritus at the University of Hawaii in 2021 and is reportedly still with us at 91.

In June 2021, an IEEE commemorative plaque recognizing ALOHAnet was installed at the University of Hawaii’s Holmes Hall.

For more information on ALOHAnet, visit https://bit.ly/44LQ85a.

Aloha.



Friday, May 16, 2025

AI data network: bits to terabits

@Mark Ollig


In March, AT&T achieved a data transmission speed of 1.6 terabits per second (tbps) on a single fiber-optic wavelength.

At that speed, one could transfer a mind-boggling 200 gigabytes of data per second.

Data was transmitted at 1.6 tbps over 184 miles of AT&T’s fiber network from Newark, NJ, to Philadelphia, PA, and was managed using Ciena Corp.’s WaveLogic 6 Extreme optics and DriveNets’ software-defined networking.

This 1.6 tbps data ran in parallel with live customer traffic on existing 100-gigabit (gbps) and 400-gigabit systems, proving terabit speeds can coexist with current network traffic.

In November 2024, Verizon also transported 1.6 tbps of data over its 73.3-mile metro fiber network route using Ciena’s WaveLogic 6 Extreme technology and nine reconfigurable optical add-drop multiplexers.

Traveling back to the 1870s, French telegraph engineer Émile Baudot invented a multiplexed printing telegraph system, allowing simultaneous multi-message transmission on one telegraph line.

He developed the five-bit Baudot code for alphanumeric characters, where each of 32 unique combinations was represented by five equal-duration ‘on’ or ‘off’ signals.

This fixed-length method made transmissions faster, more reliable, and standardized compared to Morse code, which used varying lengths of dots and dashes.

Patented in 1874, Baudot’s telegraph system significantly improved telegraphic communication.
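
The fixed-length idea above can be sketched in a few lines of Python; the five-bit patterns below are illustrative stand-ins, not Baudot's historical code assignments.

```python
# Illustrative sketch of fixed-length five-bit encoding in the spirit of
# Baudot's system. Five bits give 2**5 = 32 unique combinations. The bit
# patterns here are hypothetical examples, not the historical assignments.
CODE = {
    "A": 0b00011,
    "B": 0b11001,
    "C": 0b01110,
}

def encode(message: str) -> str:
    """Encode each character as exactly five 'on'/'off' signals (bits)."""
    return " ".join(format(CODE[ch], "05b") for ch in message)

print(encode("CAB"))  # every character occupies exactly five bits
```

Because every character is the same length, the receiver can frame characters by simply counting bits, which is what made the scheme faster and more reliable than variable-length Morse.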

The modem (modulator-demodulator), developed in 1949 at the Air Force Cambridge Research Center, converts digital data into sounds and vice versa over regular telephone lines.

The US military first used modems to transmit radar signals.

During the 1950s and 1960s, modems connected computers and teletypewriter (TTY) terminals to remote mainframe computers over telephone lines.

Bell Labs’ 1958 Bell 101 SAGE modem, used in US military air defense, operated at 110 bits per second (bps) to enable data communication over phone lines.

It converted digital data to analog audio signals using frequency-shift keying (FSK).

This technique encoded binary “bits” via distinct audio tones; a specific frequency (in hertz) indicated a “1” bit, another a “0.”

Since each FSK tone (symbol) in the Bell 101 modem represented one bit, its symbol rate of 110 baud equaled its bit rate of 110 bps.

The unit of symbol rate was named “baud,” after Émile Baudot’s contributions.
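
A minimal sketch of the FSK mapping described above, assuming two illustrative tone frequencies (not the exact Bell 101 tone assignments):

```python
# Minimal sketch of frequency-shift keying (FSK): each bit is sent as one
# of two audio tones. The frequencies below are assumed placeholder values
# for illustration, not the actual Bell 101 tones.
FREQ_FOR_BIT = {"1": 1270, "0": 1070}  # hertz (illustrative)

def fsk_tones(bits: str) -> list[int]:
    """Map a bit string to the sequence of tone frequencies sent on the line."""
    return [FREQ_FOR_BIT[b] for b in bits]

# One tone (symbol) per bit, so symbol rate (baud) equals bit rate (bps).
tones = fsk_tones("1011")
print(tones)
```

Later modems packed multiple bits into each symbol, which is why their bit rates eventually exceeded their baud rates.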

In 1962, Bell Labs introduced the Bell 103 modem, which replaced the Bell 101.

The Bell 103 operated at 300 bps, nearly three times faster than the Bell 101. It used FSK encoding, with each symbol representing one bit, so its symbol rate of 300 baud matched its bit rate of 300 bps.

The Bell 103 improved data transmission efficiency and speed over analog phone networks, and it was widely used by corporations, government agencies, universities, and early remote computing service providers.

During the 1960s and 1970s, the Winsted Telephone Company, where I worked, installed Bell 103 and other modems for local businesses over dedicated telephone lines.

Modem speed standards from the International Telecommunication Union’s Telecommunication Standardization Sector (ITU-T) over the years included:

  • 1980: The Bell 212A and ITU-T V.22 standards supported 1,200 bps full-duplex (simultaneous two-way data transmission).
  • 1984: The ITU-T V.22bis standard supported 2,400 bps full-duplex.
  • 1988: The ITU-T V.32 standard supported up to 9,600 bps, with fallback to 4,800 bps.
  • 1991: The ITU-T V.32bis standard supported speeds from 4,800 to 14,400 bps.
  • 1994: The ITU-T V.34 standard supported up to 28,800 bps (28.8 kbps).
  • 1996: The V.34+ update (also known as V.34 Annex 12) supported up to 33.6 kbps.
  • 1998: The ITU-T V.90 standard supported download speeds up to 56 kbps and upload speeds up to 33.6 kbps.
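
To put those standards in perspective, here is a short calculation of how long a one-megabyte file would take to download at each raw line rate (real-world throughput was lower due to protocol overhead):

```python
# Rough download times for a 1-megabyte file at each dial-up standard
# listed above, using the raw line rates.
FILE_BITS = 1_000_000 * 8  # one megabyte expressed in bits

standards = {
    "V.22 (1980)": 1_200,
    "V.22bis (1984)": 2_400,
    "V.32 (1988)": 9_600,
    "V.32bis (1991)": 14_400,
    "V.34 (1994)": 28_800,
    "V.34+ (1996)": 33_600,
    "V.90 (1998)": 56_000,
}

for name, bps in standards.items():
    print(f"{name}: {FILE_BITS / bps / 60:.1f} minutes")
```

The same file that took nearly two hours on a 1,200-bps V.22 modem took under three minutes on a 56-kbps V.90 connection.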

Dial-up bulletin board systems (BBSs) thrived as data speeds improved, allowing users to enjoy being “online.”

BBSs enabled message exchange, gaming, file downloads, email, news reading, content sharing, tech skill learning, and community interaction.

Some will remember commercial BBS services like CompuServe, Prodigy, and AOL, along with hobby BBSs like my own, “WBBS Online.”

You’ll also likely recall the loud modem screeches during dial-up connections and shouts of, “Hey! Hang up. I’m online,” when someone picked up an extension phone.

Many people now have access to broadband, which the Federal Communications Commission defines as 100 mbps download and 20 mbps upload. Urban areas often provide faster gigabit “Gig” (gbps) internet via fiber.

Telecom and internet providers are upgrading their backbone, metro, and data center networks to achieve terabit speeds, using optical networking transport solutions from companies like Cisco, Ciena, Nokia, Juniper Networks, Ericsson, Infinera, and Corning.

From March 3 to 6 of this year, at the Mobile World Congress in Barcelona, Jio Platforms, AMD, Cisco, and Nokia discussed the “Open Telecom AI Platform,” which uses artificial intelligence (AI) to enhance telecommunication carriers’ optical network operations.

The telecommunications industry’s migration from legacy circuit-switched digital platforms to IP-based software-defined networking (SDN) enhances network management flexibility, scalability, and efficiency.

These solutions include adopting cloud-native session border controllers and virtualized network processes in modern cloud and SDN architectures.

Before I retired from the telecom industry, I saw AI’s initial adoption across optical networks, cloud servers, and software-defined switching platforms.

I worked with the GTE/Leich electromechanical relay central office telephone switch, the Nortel Digital Multiplex System (DMS) 10/100/250/500 circuit-switched telephone exchange switch, the Siemens DCO (digital central office) electronic telephone switch, and the Metaswitch, my final voice switching platform.

The Metaswitch is a provisioning softswitch that enables Voice over IP (VoIP) services for residential and business customers.

It is a software-based replacement for the legacy telephone switches I was decommissioning before my retirement.

Telecommunications companies are using AI and machine learning (ML) to improve network efficiency and reliability for data-intensive services, working with the networking suppliers I previously mentioned.

AI-enhanced telecom networks can autonomously forecast data traffic, detect faults, perform predictive maintenance, and identify real-time anomalies indicating errors, fraud, or security breaches.

AI accelerates dynamic data rerouting for telecommunication voice traffic to prevent congestion and equipment software issues, enhances data security, supports ongoing network optimization, and promotes self-optimizing networks (SON).

Many feel AI may ultimately lead to fully autonomous network operations; I am one of them.

Along with AI, we are witnessing lightning-fast data speeds.

While the 1.6 tbps rate can transfer 200 gigabytes of data in just one second, the same transfer using the 63-year-old 300-bps Bell 103 modem would take a surprising 169 years.
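
That comparison checks out with a little arithmetic:

```python
# Checking the comparison above: transferring 200 gigabytes at 1.6 tbps
# versus the 300-bps Bell 103 modem.
bits = 200e9 * 8            # 200 gigabytes expressed in bits (1.6e12)
fast = bits / 1.6e12        # seconds at 1.6 terabits per second
slow = bits / 300           # seconds at 300 bits per second

print(f"1.6 tbps: {fast:.0f} second")
print(f"Bell 103: {slow / (365.25 * 24 * 3600):.0f} years")
```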

The data transfer rate and type of AI being used 63 years from now would undoubtedly seem magical to those of us living today.
(ChatGPT-generated image based on my text input)



Friday, May 9, 2025

Webb looks into the universe

The Space Telescope Science Institute hosted a workshop in Baltimore, MD, in mid-September 1989 to discuss the successor to the Hubble Space Telescope.

The workshop involved 130 astronomers and engineers, with NASA participating to define the requirements for a new telescope originally called the Next Generation Space Telescope.

In 2002, NASA administrator Sean O’Keefe renamed the telescope the James Webb Space Telescope (JWST) to honor Apollo-era NASA administrator James Webb.

In 2004, NASA led the JWST (also called the Webb or the Webb Telescope) project, supported by the European and Canadian space agencies.

The Webb telescope has a 21-foot four-inch primary mirror, the largest sent to space.

Made of lightweight beryllium O-30 powder, it consists of 18 hexagonal segments that maintain stability at very low temperatures.

Each segment (one of the 18 individual hexagonal mirrors that fit together to form the primary surface) features seven actuators (motors) for precise shape adjustments.

All 18 mirror segments have a thin gold coating (only about 48 grams in total) to reflect infrared light while minimizing weight.

The two outermost sunshield layers facing the sun are additionally coated with doped silicon for optimal heat reflection.

The sunshield design protects the telescope from extreme temperatures, ranging from 230 degrees Fahrenheit on the hot side to minus 394 degrees Fahrenheit on the cold side, ensuring its sensitive infrared instruments remain operational.

NASA Goddard managed the project, with Northrop Grumman of Redondo Beach, CA, as the prime contractor.

Ball Aerospace, in Broomfield, CO, developed the optical system and mirrors for the JWST.

Three Minnesota companies contributed to the Webb telescope: Multek-Sheldahl Brand Material of Northfield, Minco Products Inc. of Minneapolis, and ION Corp. of Eden Prairie.

The Webb is both a telescope and a spacecraft.

Equipped with solar panels, antennas, propulsion thrusters, thermal control, navigation systems, and data handling capabilities, the JWST efficiently operates in space.

Its propulsion thrusters use hydrazine fuel and dinitrogen tetroxide oxidizer for orbit and attitude adjustments.

NASA designated the Space Telescope Science Institute in Baltimore as Webb’s Mission Control and Science and Operations Center.

The James Webb Space Telescope was launched Dec. 25, 2021, at 7:20 a.m. ET from the Guiana Space Centre in French Guiana, located on the north Atlantic coast of South America.

It traveled aboard an Ariane 5 rocket, which stands 171 feet tall and generated 2.9 million pounds of thrust.

At about 3.5 minutes after launch, the fairing enclosing the seven-ton JWST opened at an altitude of roughly 68 miles above Earth.

Twenty-seven minutes after liftoff, at an altitude near 870 miles, Webb separated from the upper stage; shortly thereafter, it deployed its solar panel and established communication with mission control.

The JWST began its journey, arriving at the Sun-Earth Lagrange point (L2) Jan. 24, 2022, after deploying its sunshield and mirrors along the way.

It settled into a halo orbit around the second Sun-Earth Lagrange point (L2), approximately 930,000 miles from Earth.

This specific orbit allows Webb to balance the gravitational forces from the sun and Earth, providing an ideal vantage point for viewing the universe.

The Webb telescope officially began its mission to explore the universe July 12, 2022.

The JWST communicates with Mission Control through NASA’s Deep Space Network (DSN), which comprises large radio antenna complexes located near Goldstone, CA, Canberra, Australia, and Madrid, Spain.

The telescope operates on the S-band at 2.27 GHz for commands and the Ka-band at 25.9 GHz for fast science data transmission.

The Ka-band downlink enables the JWST to send data at speeds up to 28 megabits per second, transmitting at least 57.2 gigabytes of scientific data daily.
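
A quick back-of-the-envelope calculation shows what that daily volume implies for link time:

```python
# How long the Ka-band link must run each day to move 57.2 gigabytes
# at its 28 megabit-per-second peak rate.
daily_bits = 57.2e9 * 8      # 57.2 gigabytes expressed in bits
seconds = daily_bits / 28e6  # at 28 megabits per second

print(f"{seconds / 3600:.1f} hours of downlink per day")
```

In other words, roughly four and a half hours of daily downlink time at peak rate covers the telescope's minimum science data return.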

This data uses the Flexible Image Transport System (FITS) to share and analyze images of the universe.

The DSN sends FITS data to the Webb Science and Operations Center in Baltimore for processing and distribution to scientists worldwide.
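
For illustration, FITS stores its metadata as fixed 80-character ASCII header “cards” of the form “KEYWORD = value / comment.” The toy parser below handles a single card; real work would use a full FITS library such as astropy.

```python
# Toy sketch of reading one FITS header card, for illustration only.
# FITS headers are sequences of exactly 80-character ASCII records:
# columns 1-8 hold the keyword, "= " follows, then the value and an
# optional "/ comment".
def parse_card(card: str) -> tuple[str, str]:
    """Split one 80-character FITS header card into keyword and value."""
    assert len(card) == 80, "FITS cards are always exactly 80 characters"
    keyword = card[:8].strip()
    value = card[10:].split("/")[0].strip()  # drop the optional comment
    return keyword, value

card = "NAXIS   =                    2 / number of data axes".ljust(80)
print(parse_card(card))  # ('NAXIS', '2')
```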

Between May 22 and 24, 2022, a micrometeoroid struck the JWST’s C3 mirror segment, causing more damage than pre-launch models had predicted.

NASA adjusted the alignment of JWST’s primary mirror segments to reduce distortion, enabling high-quality data and imagery.

A team led by the University of Minnesota made a noteworthy announcement April 13, 2023, regarding the discovery of a small, magnified galaxy characterized by a significant rate of star formation.

The team used the Webb telescope to observe what the U of M press release called a “minuscule galaxy” magnified by the gravity of the foreground galaxy cluster RX J2129.

This observation revealed the minuscule galaxy as it existed around 500 million years after the Big Bang, which happened about 13.8 billion years ago.

In May 2024, the JWST Advanced Deep Extragalactic Survey (JADES) confirmed JADES-GS-z14-0 as the most distant galaxy discovered to date.

The galaxy is observed as it was just 290 million years after the Big Bang, its light having traveled about 13.5 billion years to reach Earth.

In March of this year, two separate teams using the ALMA telescope (Atacama Large Millimeter/Submillimeter Array) in Chile reported the detection of significant oxygen signatures originating from JADES-GS-z14-0.

The James Webb Space Telescope cost about $10 billion, far exceeding NASA’s initial estimates due to overruns and delays.

The Webb telescope is expected to be operational well into the 2040s as long as it avoids damage from micrometeoroid impacts.

More information is available at webbtelescope.org.





Friday, May 2, 2025

Looking into Signalgate

@Mark Ollig

Signal is a free, open-source app for secure messaging.

Users can chat and send encrypted messages, photos, documents, videos, voice notes, and other files.

Signal includes a “disappearing messages” feature that allows users to set a timer for automatic message deletion from seconds to weeks. Once the timer expires, messages are removed from all devices.
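
A toy sketch of how such a timer could work (an illustration of the concept, not Signal's actual implementation):

```python
# Toy model of disappearing messages: each message records an expiry
# time, and expired messages are purged from the local store.
class MessageStore:
    def __init__(self):
        self.messages = []  # list of (text, expires_at) pairs

    def send(self, text: str, ttl_seconds: float, now: float):
        """Store a message with a deletion deadline."""
        self.messages.append((text, now + ttl_seconds))

    def purge(self, now: float):
        """Remove every message whose timer has expired."""
        self.messages = [(t, e) for t, e in self.messages if e > now]

store = MessageStore()
store.send("hello", ttl_seconds=5, now=0.0)
store.purge(now=10.0)       # the five-second timer has expired
print(len(store.messages))  # 0
```

In the real app, every participating device runs the deletion locally once the timer expires.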

Most of us are aware of recent news reports of senior US officials using the Signal app to discuss sensitive military strikes, which has raised controversy and prompted investigations into security protocols and communication practices.

Leaked Signal chats exposed vital military information, including the identity of a Houthi missile expert and details about weapon systems like F-18 jets and attack drones.

Jeffrey Goldberg, editor-in-chief of the Atlantic, was inadvertently included in the chat. Later, the full transcript, which contained sensitive information about US military strikes against Houthi positions, was published.

The Signal transcript showed Defense Secretary Pete Hegseth disclosed the exact timings of warplane launches and bomb drops before the attacks on Yemen’s Houthis.

Major news outlets, including the New York Times, the Washington Post, the Atlantic, AP, CNN, Fox News, and PBS, have reported on the unauthorized disclosure of Signal chats involving senior US officials and sensitive military information.

POLITICO, an often-cited source for news on politics, reported April 2 of this year that “a dozen current and former officials confirmed” Signal is used across government agencies, even though there are “warnings about its security vulnerabilities” and “no clear oversight” of how it’s used.

The AP (Associated Press), on March 24 of this year, found Signal accounts for government officials “in nearly every state, including many legislators and their staff,” with some accounts registered to “government cellphone numbers” and others to “personal numbers.”

The AP notes that encrypted apps like Signal “often skirt open records laws,” and that “without special archiving software, the messages frequently aren’t returned under public information requests.”

The media has called this “The Signal Saga,” “Signal Scandal,” and “Signalgate.”

“Signalgate” reminds me of the Watergate Senate hearings, which were nationally televised from May to November 1973 – and yes, I do remember watching them.

Signal: a public messaging app

Signal is an open-source messaging app that offers end-to-end encryption.

It is operated by the Signal Technology Foundation, a non-profit organization founded in 2018 by Moxie Marlinspike and Brian Acton.

Signal maintains global accessibility by using cloud infrastructure from providers like Amazon Web Services (AWS), Google Compute Engine, and Microsoft Azure.

Signal offers strong end-to-end encryption, but its use of centralized public cloud servers presents security risks, particularly when dealing with sensitive government information.

The encryption itself is not compromised, but using infrastructure outside direct government control increases the risk of unauthorized access or exploitation, making Signal unacceptable for US government-classified communications.

Signal’s source code is available on GitHub at https://github.com/signalapp, and its official website is https://signal.org.

NPR (National Public Radio) reported March 25 of this year, “The Pentagon issued a department-wide advisory March 18, 2025, warning against using Signal even for unclassified information.”

It highlighted the dangers of using third-party messaging apps for official communications due to vulnerabilities that foreign adversaries could exploit.

The Pentagon clarified that third-party messaging apps like Signal may be used for unclassified accountability or recall exercises, but are not authorized to process or store non-public unclassified data.

SIPRNet: secure communications infrastructure:
The US government’s SIPRNet (Secret Internet Protocol Router Network) traces its origins to the early 1980s launch of Defense Secure Network 1 (DSNET 1) under the Defense Data Network (DDN) initiative.

While SIPRNet was not formally named until the 1990s, its operational roots trace back to this classified communications effort, which aimed to create a secure infrastructure for classified communications across various levels of sensitivity.

SIPRNet, which evolved from DSNET 1, became operational by 1997 and serves as the Department of Defense’s classified network for secret-level information.

SIPRNet is used for secure communication between military branches, government agencies, and international partners.

It handles classified information up to the secret level and employs government-approved encryption.

SIPRNet enables real-time data sharing that is secured by strict encryption and multi-factor authentication (MFA).

It operates on a physically isolated infrastructure, separate from both the public internet and NIPRNet (Non-classified Internet Protocol Router Network), the Department of Defense’s global network for unclassified data.

Its security is reinforced through host-based security systems, continuous compliance monitoring, and tools like HBSS (Host-Based Security System) and ACAS (Assured Compliance Assessment Solution).

The US Department of Defense enforces strict rules to protect data integrity, including strong password policies, separate admin accounts, and the banning of unauthorized software or hardware.

Regular audits and the Cyber Command Readiness Inspection (CCRI) ensure that security measures are continuously maintained and that any weaknesses are promptly addressed.

Unlike Signal, which uses commercial infrastructure, SIPRNet uses specialized defense-in-depth systems to provide a secure environment for classified communications.

SIPRNet protects sensitive data and national security by using a physically isolated network, strong encryption, and multi-factor authentication with hardware tokens.

Why Signal is not suitable for classified communications:
Signal is suitable for personal secure messaging but does not meet US military standards for classified communications. It is not fit for sensitive government information due to its reliance on third-party cloud services and the public internet, unlike SIPRNet, which has stronger security.

Signalgate has brought to our attention the importance of secure communication protocols for safeguarding our nation’s sensitive information and, most of all, maintaining the public’s trust.

Created using Imagen-3 on Gemini Advanced AI


Friday, April 25, 2025

Minnesota’s push for statewide broadband access

@Mark Ollig

For many rural Minnesotans, accessing healthcare through telehealth is difficult, or sometimes impossible, due to slow internet speeds and a shortage of nearby clinics or doctors.

I recently read the 2024 annual report from the Minnesota Office of Broadband Development (OBD), published Jan. 15, 2025.

The report highlights the need to expand broadband internet access in underserved areas, as recommended by the Minnesota Department of Health, to enhance telehealth service availability.

In March 2024, the Federal Communications Commission (FCC) redefined broadband standards as having a minimum download speed of 100 mbps and an upload speed of 20 mbps.

The 2024 OBD report reveals that 89,000 households in Minnesota do not have access to the 100/20 mbps broadband standard.

Additionally, 143,000 households lack access to the older 25/3 mbps benchmark.

According to table three on page 17 of the report, while 99.57% of metro households meet the 100/20 mbps goal, only 91.61% of households in greater Minnesota do.

You can read the OBD report at https://bit.ly/4imuVSj.

The FCC’s Affordable Connectivity Program (ACP) ended June 1, 2024, affecting 245,000 low-income households in Minnesota.

The loss of congressional ACP funding has further limited broadband access for Minnesota’s low-income residents, seniors, rural communities, and indigenous tribal nations.

Introduced March 1, 2024, and currently under legislative review, Minnesota Senate File (SF) 2889 aims to modernize broadband development and promote digital equity throughout the state.

The SF 2889 bill stresses digital inclusion and proposes renaming the state’s broadband office to the ‘Office of Broadband Development and Digital Equity,’ dedicating this office to coordinating these efforts.

Here’s a closer look at what SF 2889 outlines.

Section one amends data privacy rules concerning internet service provider data shared with the state’s broadband office and officially renames that office as the Office of Broadband Development and Digital Equity.

Section two reinforces this by amending the office’s primary statute to reflect the new name and its expanded focus on broadband adoption and digital inclusion for underserved populations.
It also details the office’s role in statewide planning and adds requirements for enrollment data and equity recommendations in annual reports.

Section three makes a conforming amendment to section 116J.391, subdivision one, for consistency with the office’s updated name and focus.

Section four updates key Broadband Grant Program definitions, importantly setting the “underserved areas” benchmark at the modern 100 mbps download / 20 mbps upload standard and defining qualifying wireless services as “served.”

Section five amends section 116J.395 to revise the priorities of the border-to-border Broadband Grant Program.

It requires that at least 50% of its funds go to projects meeting workforce standards, thereby linking broadband expansion with job creation.

Section seven introduces a grant program for apartments and manufactured home parks, focused on improving broadband access and digital equity.

The program finances infrastructure upgrades, affordable services, and digital inclusion initiatives, targeting high-need areas.

Section eight amends the existing statute (116J.397) for broadband data collection and mapping.

The office continues its ongoing work under this statute, which has been required since 2016.

This includes independent data collection and verification, analysis for investment planning, adoption surveys, and the production of annual public service availability maps, which are due each April 15.

Section nine establishes clear statewide goals for 2028: 95% of households should have broadband, 70% of eligible households should use service discounts, and 95% should own a computer or similar device for accessing the internet.

Minnesota SF 2889 has been referred to the Senate Agriculture, Veterans, Broadband, and Rural Development Committee.

You can follow its progress and read the full text of the bill on the Minnesota Office of the Revisor of Statutes website.

The $42.45 billion federal Broadband Equity, Access, and Deployment (BEAD) program, funded by the 2021 Infrastructure Investment and Jobs Act and including funding for broadband internet to states, faces potential rollout delays nationwide.

Minnesota, which was allocated $651.8 million from BEAD, is concerned these delays will impact the deployment of broadband projects in our state.

Seeking to prevent funding holdups, our state’s broadband office formally made its concerns known to the US Commerce Department in early April of this year.

Other states have also raised concerns.

The success of broadband projects depends not only on securing funding but also on safely deploying a qualified workforce.

Minnesota Statute 326B.198 establishes the Safety-Qualified Underground Telecommunications Installer Certification Program through the Department of Labor and Industry (DLI) to enhance safety.

Installers must complete 40 hours of training, pass an exam, and take a four-hour refresher every three years.

At least two certified installers are required for horizontal directional drilling (HDD) of fiber optic cables.

The certification starts July 1 of this year in the Twin Cities and Jan. 1, 2026, for the rest of Minnesota, with DLI-approved training programs.

You can read Minnesota Statute 326B.198 on the Minnesota Office of the Revisor of Statutes website.

Many decades ago, while working at the Winsted Telephone Company, my brother and I regularly buried telephone cables beneath highways and driveways.

We used a Case Davis Fleetline 40+4 trencher, equipped with a Ditch Witch Hydro-Boring unit powered by the Case’s hydraulic system.

Ten-foot sections of one-inch internal diameter pipe were connected with clips and then pushed forward while being rotated to bore a tunnel under the driveway or highway.

Once the tunnel bore was complete, we attached a rope to the end of the pipe string and used it to pull the pipes back out, leaving the rope running through the tunnel.

Then, we secured the telephone cable to the end of that rope using a wire mesh grip (often called a cable sock).

We used the trencher to pull the rope, which drew the cable through the tunnel.

Then, we pulled enough cable to reach the nearest ground-level pedestal, where it was spliced.

Minnesota is working to provide broadband access in rural and underserved areas.

Affordable broadband internet should be available to everyone.
McLeod County - per MN Broadband data (2024)