
Friday, May 30, 2025

The journey from ALOHAnet to Ethernet: A LAN is born

@Mark Ollig


In the late 1960s, Professors Norman Abramson and Franklin Kuo of the University of Hawaii’s College of Engineering created ALOHAnet, an early wireless data network using radio frequencies.

By June 1971, ALOHAnet had become operational, providing inter-island wireless access to the University of Hawaii’s central mainframe computer.

ALOHAnet’s use of randomized access allowed multiple users to share the same radio channel efficiently.

It also enhanced wireless data traffic management by enabling devices to transmit data immediately and resolving signal collisions through randomized retransmissions.
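
A minimal Python sketch of that idea (my own illustration, assuming a hypothetical transmit function that reports whether the packet got through; it is not ALOHAnet’s actual software) might look like this:

import random
import time

def aloha_send(packet, transmit, max_wait=0.5):
    # ALOHA-style sender: transmit immediately; if the receiver reports a
    # collision, wait a random delay and retransmit until it gets through.
    while True:
        if transmit(packet):          # True means no collision occurred
            return
        time.sleep(random.uniform(0, max_wait))  # randomized retransmission delay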

In 1972, Xerox Palo Alto Research Center (PARC) in California began developing the Alto computer, explicitly as part of their vision for “the office of the future,” which included wired networked personal computers and shared resources like printers.

That same year, Robert Metcalfe joined Xerox PARC to develop a wired network that linked Alto computers and shared devices such as printers.

Also in 1972, Metcalfe proposed his doctoral thesis at Harvard, focusing on connecting the Massachusetts Institute of Technology’s (MIT) mainframe computer to the Advanced Research Projects Agency Network (ARPANET) and analyzing the network’s performance.

His initial thesis was rejected by his Harvard dissertation committee on the grounds that “it wasn’t theoretical enough.”

To address this, Metcalfe studied Norman Abramson’s 1970 paper on the ALOHAnet system and incorporated its mathematical analysis of random access protocols into his revised thesis.

After visiting Hawaii to learn firsthand about ALOHAnet’s random access protocols, he constructed mathematical models to improve the academic accuracy of his work.

Metcalfe then revised his thesis, “Packet Communication,” which was accepted, and he earned his PhD in 1973.

While at Xerox PARC, on May 22, 1973, he wrote an internal memo officially titled “Alto Ethernet,” sometimes informally referred to as “Ether Acquisition” in later sources. In it, he proposed a shared 50-ohm coaxial cable to connect devices like the Alto and PDP-11 in a tree-structure topology.

The PARC internal memo begins: “The ether network. We plan to build a so-called broadcast computer communication network, not unlike the ALOHA system’s radio network, but specifically for in-building minicomputer communication.”

In November 1973, Xerox PARC created the first Ethernet prototype using a 50-ohm coaxial cable.

This prototype local area network (LAN) achieved a data transmission speed of 2.94 Mbps.

In 1973, Robert Metcalfe coined the term “Ethernet,” inspired by the “luminiferous ether,” which was believed to carry light waves.

He used it to describe the shared coaxial cable that transmits data between computers, likening it to how the ether carried light to all.

CSMA/CD (Carrier Sense Multiple Access with Collision Detection) is the protocol Metcalfe used in early Ethernet networks to manage access to a shared medium and detect data collisions.

Devices listen for traffic before transmitting, and if a collision occurs, the protocol uses a randomized backoff algorithm to retry transmission after a delay.
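
Here is a minimal Python sketch of that loop (a general illustration of CSMA/CD with binary exponential backoff, not PARC’s actual code; the medium object with its busy and transmit methods is a hypothetical stand-in, and the 51.2-microsecond slot time comes from the later 10 Mbps Ethernet standard):

import random
import time

SLOT_TIME = 51.2e-6  # seconds per backoff slot (value from 10 Mbps Ethernet)

def csma_cd_send(frame, medium, max_attempts=16):
    # Listen before transmitting, detect collisions while sending, and back
    # off a random number of slot times before retrying.
    for attempt in range(1, max_attempts + 1):
        while medium.busy():              # carrier sense: wait for an idle cable
            pass
        if medium.transmit(frame):        # True if no collision was detected
            return True
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        time.sleep(slots * SLOT_TIME)     # binary exponential backoff
    return False                          # give up after too many collisions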

The transition away from coaxial cabling began in the late 1980s and was solidified by the introduction of the 10BASE-T standard in 1990.

This standard utilized Category 3 unshielded twisted-pair (UTP) cabling in a star topology, allowing each device to connect to a central hub or switch, which provided more flexibility and cost-effectiveness.

Vampire taps are physical connectors that attach computers and printers to the earlier coaxial Ethernet cables without interrupting the network.

They work by piercing (biting) into the coaxial cable’s insulation to connect directly to the copper conductor without cutting the main cable.

Vampire taps remind me of my days splicing telephone wires with 3M™ Scotchlok™ connectors at the local telephone company.

These connectors allowed telephone wires to be inserted with their insulation intact, speeding up the splicing process.

I mostly used the UR (red) connector, which has three ports and is used for splices joining two or three cut solid copper wires ranging from 19 to 26 AWG. It is a gel-filled connector designed to be durable and moisture-resistant for long-term reliability.

The UG (green) connector is specifically designed as a tap splice; it allows a new telephone wire to be connected to a continuous, uncut line, making it ideal for tapping into existing circuits.

For thicker wires, the UO (orange) connector, model U1O, is a gel-filled, moisture-resistant butt splice for two wires ranging from 18 to 14 AWG.

It’s been more than 30 years since I last spliced telephone wires using 3M Scotchlok UR connectors, and many of those splices are still in service.

First, I’d prepare the “joint” (the specific point where the wire ends meet to be connected) by twisting the wires together one full turn.

Then, I would cut the wire ends evenly to about one inch and not strip the insulation, as the connector is designed for insulated wires.

Holding the UR connector with its red button facing down, I would insert the unstripped wires all the way into the individual ports: two ports for two wires, and three ports for three wires.

To complete the splice, I’d firmly crimp the red button using a Scotchlok E-9 series tool – I often called it the “Scotchlocker.” This action caused the sharp metal plate inside the UR to “bite” through the insulation and into the copper wires, creating a secure electrical connection.

The splicing procedure would be much the same for the UG, UY, and UO connectors.

In 1975, Xerox filed a patent application titled “Multipoint data communication system with collision detection” (US Patent 4,063,220, granted in 1977).

In a 2019 United States Patent and Trademark Office (USPTO) “Journeys of Innovation” interview, Robert Metcalfe credited ALOHAnet by stating, “And the key idea was to use randomized retransmissions.”

And so, ALOHAnet assisted in the birth of the Ethernet LAN.




Friday, May 23, 2025

ALOHAnet: the dawn of the wireless computing age

@Mark Ollig


Developed at the University of Hawaii nearly 57 years ago, ALOHAnet pioneered the random access wireless protocols that enable your smart device’s Wi-Fi connection.

Cellular and satellite communications also owe a debt of gratitude to ALOHAnet.

The ALOHAnet project began in September 1968 at the University of Hawaii on the island of Oahu.

The university’s remote campuses on Maui, Hawaii Island, and Kauai faced the challenge of providing access to its central mainframe computer (IBM System/360 Model 65) located in Mānoa Valley, on Oahu.

In the late 1960s, students on these islands accessed the main campus computer through remote terminals linked by copper telephone lines.

They used devices like teletypes (TTYs), including the Model 33 and Model 35, which had keyboards and printed paper output, as well as early video display terminals such as the IBM 2260 and DEC VT05 to connect to the university’s central computer for processing.

The local telephone network, designed for analog voice communication, sometimes struggled with data transmissions from the terminal modems, which converted digital signals into analog for transmission.

These telephone-network limitations drove the development of a radio-based system, which became ALOHAnet, as University of Hawaii professor Norman Abramson explained in the Feb. 1, 1972, Honolulu Advertiser.

Another consideration was the expense of inter-island telephone calls, as terminal users sometimes needed only seconds of computer time but were billed for a three-minute minimum.

Professors Norman Abramson and Franklin Kuo from the University of Hawaii developed ALOHAnet, a wireless data system using radio frequencies.

They introduced random access protocols that enabled multiple devices to share a single radio channel, laying the foundation for modern technologies.

ALOHAnet was developed as a proof-of-concept network to connect Oahu with other campuses in the Hawaiian Islands via wireless, radio frequency channels.

Both held doctorate degrees.

Abramson earned a bachelor’s and master’s in physics before obtaining his doctorate in electrical engineering from Stanford University.

Kuo received his BS, MS, and PhD degrees in electrical engineering from the University of Illinois Urbana-Champaign.

ALOHAnet transmitted data to Hawaiian schools through a radio channel linked to an IBM System/360 Model 65 mainframe.

An HP 2115A minicomputer, called the “Menehune,” acted as the central communication processor and network gateway.

According to Franklin Kuo’s 1981 system diagram, the Menehune managed data traffic across two 100 kHz (kilohertz) UHF (ultra-high-frequency) radio channels, 407.350 MHz (megahertz) and 413.475 MHz.

ALOHAnet’s design prioritized simplicity with direct packet bursts on fixed radio frequencies (channels).

In 1969, the team chose a cost-effective fixed-frequency approach over spread spectrum technology, which required complex and expensive hardware.

While spread spectrum would later become vital for military and commercial wireless, ALOHAnet’s fixed-frequency method made packet radio practical for academic use at the time.

The Menehune also functioned as a multiplexor/concentrator, similar to the interface message processors (IMPs) of ARPANET (Advanced Research Projects Agency Network), a pioneering packet-switching network and internet predecessor.

University of Hawaii engineering students gained hands-on experience by developing components for ALOHAnet, including its “communications modules,” or terminal control units (TCUs).

They connected their teletypes and video display terminals to these TCU modules.

These TCUs managed the wireless transmission of terminal data by applying ALOHAnet’s radio access protocols.

Dr. Abramson’s 1970 paper, “The ALOHA System,” described ALOHAnet as the first packet-radio network.
It detailed a 24,000 baud channel with 704-bit packets lasting 29 milliseconds and explained how terminal control units (TCUs) formatted these packets and managed re-transmissions.
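
Those figures agree with simple arithmetic; here is a quick Python check:

bits_per_packet = 704
channel_rate_bps = 24_000
duration_ms = bits_per_packet / channel_rate_bps * 1000
print(f"{duration_ms:.1f} ms per packet")  # prints about 29.3 ms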

ALOHAnet originally operated at 24,000 bits per second (bps), but speeds often dropped to 9,600 bps.

By June 1971, ALOHAnet was up and running with its random access protocol. Students on the islands were sending data packets via radio from their teletypes and CRT terminals.

If a data packet collided, the terminal would wait a random amount of time before re-transmitting until the data was successfully sent.

The Feb. 1, 1972, Honolulu Advertiser article explained that ALOHAnet could support more than 500 users on a single radio channel to the central computer.

ALOHAnet thus confirmed that multiple users could efficiently share a radio channel via simple, decentralized rules, eliminating the need for complex centralized management.
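
For readers who like the math, the standard textbook model of pure ALOHA (my addition, not a figure quoted from Abramson’s paper) puts the channel’s peak useful throughput at about 18 percent of its raw speed, which helps explain how one 24,000 bps channel could serve hundreds of lightly loaded terminals:

import math

def aloha_throughput(load):
    # Pure ALOHA model: fraction of the channel carrying successful packets
    # when "load" packets are offered per packet-time (S = G * e^(-2G)).
    return load * math.exp(-2 * load)

peak = aloha_throughput(0.5)                  # the maximum occurs at G = 0.5
print(f"peak throughput = {peak:.3f}")        # about 0.184, or 18.4%
print(f"useful capacity = {peak * 24_000:.0f} bps")
# Roughly 4,400 bps of useful capacity: plenty for hundreds of terminals
# that each send only a few characters per second.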

In December 1972, ALOHAnet became the first satellite node of ARPANET, linking the University of Hawaii to NASA’s Ames Research Center in California via a 50 kbps channel through the INTELSAT IV F-4 satellite.

This satellite connection, funded by ARPA, enabled ALOHAnet users to access ARPANET resources.

Today, the INTELSAT IV F-4 satellite is no longer operational, but it remains in orbit around Earth.

In the fall of 1976, ALOHAnet ceased operations when funding from its US government sponsors, primarily ARPA and NASA, ended.

The Institute of Electrical and Electronics Engineers (IEEE) hosted an event Oct. 13, 2020, from the University of Hawaii at Mānoa, attended virtually due to COVID-19, to commemorate ALOHAnet’s contributions to wireless communication.

Key speakers included IEEE leaders, as well as Dr. Norman Abramson and Dr. Franklin Kuo.

Vint Cerf, acknowledged as “the father of the internet,” also spoke, recognizing ALOHAnet’s pioneering advancements.

Dr. Norman Manuel Abramson died Dec. 1, 2020, in San Francisco at the age of 88.

Dr. Franklin Kuo became a professor emeritus at the University of Hawaii in 2021 and is reportedly still with us at 91.

In June 2021, an IEEE commemorative plaque recognizing ALOHAnet was installed at the University of Hawaii’s Holmes Hall.

For more information on ALOHAnet, visit https://bit.ly/44LQ85a.

Aloha.



Friday, May 16, 2025

AI data network: bits to terabits

@Mark Ollig


In March, AT&T achieved a data transmission speed of 1.6 terabits per second (Tbps) on a single fiber-optic wavelength.

At that speed, one could transfer a mind-boggling 200 gigabytes of data per second.

Data was transmitted at 1.6 Tbps over 184 miles of AT&T’s fiber network from Newark, NJ, to Philadelphia, PA, and was managed using Ciena Corp.’s WaveLogic 6 Extreme optics and DriveNets’ software-defined networking.

This 1.6 Tbps data ran in parallel with live customer traffic on existing 100-gigabit (Gbps) and 400-gigabit systems, proving terabit speeds can coexist with current network traffic.

In November 2024, Verizon also transported 1.6 Tbps of data over its 73.3-mile metro fiber network route using Ciena’s WaveLogic 6 Extreme technology and nine reconfigurable optical add-drop multiplexers.

Traveling back to the 1870s, French telegraph engineer Émile Baudot invented a multiplexed printing telegraph system, allowing simultaneous multi-message transmission on one telegraph line.

He developed the five-bit Baudot code for alphanumeric characters, where each of 32 unique combinations was represented by five equal-duration ‘on’ or ‘off’ signals.

This fixed-length method made transmissions faster, more reliable, and standardized compared to the Morse code, which used varying lengths of dots and dashes.
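
As a simple illustration of the fixed-length idea in Python (the letter-to-code mapping below is alphabetical for clarity, not Baudot’s historical character assignments), five on/off signals yield exactly 32 distinct codes:

# Illustrative 5-bit fixed-length code: 2**5 = 32 possible combinations.
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
code = {ch: format(i, "05b") for i, ch in enumerate(alphabet)}
encoded = " ".join(code[ch] for ch in "HELLO")
print(encoded)  # 00111 00100 01011 01011 01110
# Every character is exactly five signals long, so the receiver can split the
# stream at fixed boundaries, with no variable-length timing as in Morse code.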

Patented in 1874, Baudot’s telegraph system significantly improved telegraphic communication.

The modem (modulator-demodulator), developed in 1949 at the Air Force Cambridge Research Center, converts digital data into sounds and vice versa over regular telephone lines.

The US military first used modems to transmit radar signals.

During the 1950s and 1960s, modems connected computers and teletypewriter (TTY) terminals to remote mainframe computers over telephone lines.

Bell Labs’ 1958 Bell 101 SAGE modem, used in US military air defense, operated at 110 bits per second (bps), which in this case also equaled 110 baud, enabling data communication over phone lines.

It converted digital data to analog audio signals using frequency-shift keying (FSK).

This technique encoded binary “bits” via distinct audio tones; a specific frequency (in hertz) indicated a “1” bit, another a “0.”

Since each FSK tone (symbol) in the Bell 101 modem represented one bit, its symbol rate of 110 baud equaled its bit rate of 110 bps.

The unit of symbol rate was named “baud,” after Émile Baudot’s contributions.
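
A small Python sketch (my illustration of the FSK idea, using the commonly cited 1,270 Hz and 1,070 Hz mark and space tones of the Bell 103’s originate side rather than actual modem firmware) shows why one bit per symbol makes the baud rate equal the bit rate:

import numpy as np

SAMPLE_RATE = 8000                   # audio samples per second
BAUD = 300                           # symbols per second; 1 bit per symbol = 300 bps
FREQ_MARK, FREQ_SPACE = 1270, 1070   # hertz: tone for a "1" bit, tone for a "0" bit

def fsk_modulate(bits):
    # Turn a bit string into an audio waveform, one tone per bit.
    samples_per_bit = SAMPLE_RATE // BAUD
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    tones = [np.sin(2 * np.pi * (FREQ_MARK if b == "1" else FREQ_SPACE) * t)
             for b in bits]
    return np.concatenate(tones)

waveform = fsk_modulate("1011001")
print(len(waveform) / SAMPLE_RATE, "seconds of audio")  # 7 bits at 300 baud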

In 1962, Bell Labs introduced the Bell 103 modem, which replaced the Bell 101.

The Bell 103 operated at 300 bps, nearly three times faster than the Bell 101. It used FSK encoding, with each symbol representing one bit, so its symbol rate of 300 baud matched its bit rate of 300 bps.

The Bell 103 improved data transmission efficiency and speed over analog phone networks, and it was widely used by corporations, government agencies, universities, and early remote computing service providers.

During the 1960s and 1970s, the Winsted Telephone Company, where I worked, installed Bell 103 and other modems for local businesses over dedicated telephone lines.

The International Telecommunication Union’s Telecommunication Standardization Sector (ITU-T) modem speed over the years included:

  • 1980: The Bell 212A and ITU-T V.22 standards supported 1,200 bps full-duplex (simultaneous two-way data transmission).
  • 1984: The ITU-T V.22bis standard supported 2,400 bps full-duplex.
  • 1988: The ITU-T V.32 standard supported up to 9,600 bps, with fallback to 4,800 bps.
  • 1991: The ITU-T V.32bis standard supported speeds from 4,800 to 14,400 bps.
  • 1994: The ITU-T V.34 standard supported up to 28,800 bps (28.8 kbps).
  • 1996: The V.34+ update (also known as V.34 Annex 12) supported up to 33.6 kbps.
  • 1998: The ITU-T V.90 standard supported download speeds up to 56 kbps and upload speeds up to 33.6 kbps.

Dial-up bulletin board services (BBSs) thrived as data speeds improved, allowing users to enjoy being “online.”

BBSs enabled message exchange, gaming, file downloads, email, news reading, content sharing, tech skill learning, and community interaction.

Some will remember commercial BBS services like CompuServe, Prodigy, and AOL, along with hobby BBSs like my own, “WBBS Online.”

You’ll also likely recall the loud modem screeches during dial-up connections and shouts of, “Hey! Hang up. I’m online,” when someone picked up an extension phone.

Many people now have access to broadband, which the Federal Communications Commission defines as 100 Mbps download and 20 Mbps upload. Urban areas often provide faster gigabit “Gig” (Gbps) internet via fiber.

Telecom and internet providers are upgrading their backbone, metro, and data center networks to achieve terabit speeds, using optical networking transport solutions from companies like Cisco, Ciena, Nokia, Juniper Networks, Ericsson, Infinera, and Corning.

From March 3 to 6 of this year, at the Mobile World Congress in Barcelona, Jio Platforms, AMD, Cisco, and Nokia discussed the “Open Telecom AI Platform,” which uses artificial intelligence (AI) to enhance telecommunication carriers’ optical network operations.

The telecommunications industry’s migration from legacy circuit-switched digital platforms to IP-based software-defined networking (SDN) enhances network management flexibility, scalability, and efficiency.

These solutions include adopting cloud-native session border controllers and virtualized network processes in modern cloud and SDN architectures.

Before I retired from the telecom industry, I saw AI’s initial adoption across optical networks, cloud servers, and software-defined switching platforms.

I worked with the GTE/Leich electromechanical relay central office telephone switch, the Nortel Digital Multiplex System (DMS) 10/100/250/500 circuit-switched telephone exchange switch, the Siemens DCO (digital central office) electronic telephone switch, and the Metaswitch, my final voice switching platform.

The Metaswitch is a provisioning softswitch that enables Voice over IP (VoIP) services for residential and business customers.

It is a software-based replacement for the legacy telephone switches I was decommissioning before my retirement.

Telecommunications companies, working with the networking suppliers I previously mentioned, are using AI and machine learning (ML) to improve network efficiency and reliability for data-intensive services.

AI-enhanced telecom networks can autonomously forecast data traffic, detect faults, perform predictive maintenance, and identify real-time anomalies indicating errors, fraud, or security breaches.

AI accelerates dynamic data rerouting for telecommunication voice traffic to prevent congestion and equipment software issues, enhances data security, supports ongoing network optimization, and promotes self-optimizing networks (SON).

Many feel AI may ultimately lead to fully autonomous network operations; I am one of them.

Along with AI, we are witnessing lightning-fast data speeds.

Transferring 200 gigabytes of data takes just one second at 1.6 Tbps; with the 63-year-old, 300 bps Bell 103 modem, the same transfer would take a surprising 169 years.
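
The arithmetic behind those figures is straightforward:

data_bits = 200e9 * 8                    # 200 gigabytes expressed in bits
print(data_bits / 1.6e12, "seconds")     # at 1.6 Tbps: 1.0 second
seconds = data_bits / 300                # at the Bell 103's 300 bps
print(seconds / (3600 * 24 * 365.25), "years")  # roughly 169 years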

The data transfer rate and type of AI being used 63 years from now would undoubtedly seem magical to those of us living today.
(ChatGPT generated image based on my text input)



Friday, May 9, 2025

Webb looks into the universe

The Space Telescope Science Institute hosted a workshop in Baltimore, MD, in mid-September 1989 to discuss the successor to the Hubble Space Telescope.

The workshop involved 130 astronomers and engineers, with NASA participating to define the requirements for a new telescope originally called the Next Generation Space Telescope.

In 2002, NASA administrator Sean O’Keefe renamed the telescope the James Webb Space Telescope (JWST) to honor Apollo-era NASA administrator James Webb.

Since 2004, NASA has led the JWST (also called the Webb or the Webb Telescope) project, supported by the European and Canadian space agencies.

The Webb telescope has a 21-foot four-inch primary mirror, the largest sent to space.

Made of lightweight beryllium O-30 powder, it consists of 18 hexagonal segments that maintain stability at very low temperatures.

Each segment (one of the 18 individual hexagonal mirrors that fit together to form the primary surface) features seven actuators (motors) for precise shape adjustments.

All 18 mirror segments have a thin gold coating (only about 48 grams in total) to reflect infrared light while minimizing weight.

The two outermost sunshield layers facing the sun are additionally coated with doped silicon for optimal heat reflection.

The sunshield design protects the telescope from extreme temperatures, ranging from 230 degrees Fahrenheit on the hot side to minus 394 degrees Fahrenheit on the cold side, ensuring its sensitive infrared instruments remain operational.

NASA Goddard managed the project, with Northrop Grumman of Redondo Beach, CA, as the prime contractor.

Ball Aerospace, in Broomfield, CO, developed the optical system and mirrors for the JWST.

Three Minnesota companies contributed to the Webb telescope: Multek-Sheldahl Brand Material of Northfield, Minco Products Inc. of Minneapolis, and ION Corp. of Eden Prairie.

The Webb is both a telescope and a spacecraft.

Equipped with solar panels, antennas, propulsion thrusters, thermal control, navigation systems, and data handling capabilities, the JWST efficiently operates in space.

Its propulsion thrusters use hydrazine fuel and dinitrogen tetroxide oxidizer for orbit and attitude adjustments.

NASA designated the Space Telescope Science Institute in Baltimore as Webb’s Mission Control and Science and Operations Center.

The James Webb Space Telescope was launched Dec. 25, 2021, at 7:20 a.m. ET from the Guiana Space Centre in French Guiana, located on the north Atlantic coast of South America.

It traveled aboard an Ariane 5 rocket, which stood 171 feet tall and generated 2.9 million pounds of thrust at liftoff.

At about 3.5 minutes after launch, the fairing enclosing the seven-ton JWST opened at an altitude of roughly 68 miles above Earth.

Twenty-seven minutes after liftoff, at an altitude near 870 miles, Webb separated from the upper stage; shortly thereafter, it deployed its solar panel and established communication with mission control.

The JWST began its journey, arriving at the Sun-Earth Lagrange point (L2) Jan. 24, 2022, after deploying its sunshield and mirrors along the way.

It settled into a halo orbit around the second Sun-Earth Lagrange point (L2), approximately 930,000 miles from Earth.

This specific orbit allows Webb to balance the gravitational forces from the sun and Earth, providing an ideal vantage point for viewing the universe.

The Webb telescope officially began its mission to explore the universe July 12, 2022.

The JWST communicates with Mission Control through NASA’s Deep Space Network (DSN), which comprises large radio antenna complexes located near Goldstone, CA, Canberra, Australia, and Madrid, Spain.

The telescope operates on the S-band at 2.27 GHz for commands and the Ka-band at 25.9 GHz for fast science data transmission.

The Ka-band downlink enables the James Webb Space Telescope (JWST) to send data at speeds up to 28 megabits per second, transmitting at least 57.2 gigabytes of scientific data daily.

The data is packaged in the Flexible Image Transport System (FITS) format, which scientists use to share and analyze images of the universe.
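
For the curious, FITS files can be opened with widely used astronomy software such as Python’s astropy library; here is a minimal sketch (the file name is hypothetical, and I assume the image sits in the file’s first extension, as in typical Webb data products):

from astropy.io import fits

with fits.open("jwst_example_image.fits") as hdul:   # hypothetical file name
    hdul.info()                   # list the header-data units in the file
    header = hdul[0].header       # metadata: instrument, target, exposure, etc.
    data = hdul[1].data           # the image itself, as a numpy array
    print(header.get("TELESCOP"), data.shape)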

The DSN sends FITS data to the Webb Science and Operations Center in Baltimore for processing and distribution to scientists worldwide.

Between May 22 and 24, 2022, a micrometeoroid struck the JWST’s C3 mirror segment, causing more damage than pre-launch models had predicted.

NASA adjusted the alignment of JWST’s primary mirror segments to reduce distortion, enabling high-quality data and imagery.

A team led by the University of Minnesota made a noteworthy announcement April 13, 2023, regarding the discovery of a small, magnified galaxy characterized by a significant rate of star formation.

The team used the Webb telescope to observe what the U of M press release called a “minuscule galaxy” magnified by the gravity of the foreground galaxy cluster RX J2129.

This observation revealed the minuscule galaxy as it existed around 500 million years after the Big Bang, which happened about 13.8 billion years ago.

In May 2024, the JWST Advanced Deep Extragalactic Survey (JADES) confirmed JADES-GS-z14-0 as the most distant galaxy discovered to date.

The galaxy is observed as it was just 290 million years after the Big Bang, its light having traveled about 13.5 billion years to reach Earth.

In March of this year, two separate teams using the ALMA telescope (Atacama Large Millimeter/Submillimeter Array) in Chile reported the detection of significant oxygen signatures originating from JADES-GS-z14-0.

The James Webb Space Telescope cost about $10 billion, far exceeding NASA’s initial estimates due to overruns and delays.

The Webb telescope is expected to be operational well into the 2040s as long as it avoids damage from micrometeoroid impacts.

More information is available at webbtelescope.org.





Friday, May 2, 2025

Looking into Signalgate

@Mark Ollig

Signal is a free, open-source app for secure messaging.

Users can chat and send encrypted messages, photos, documents, videos, voice notes, and other files.

Signal includes a “disappearing messages” feature that allows users to set a timer for automatic message deletion from seconds to weeks. Once the timer expires, messages are removed from all devices.

Most of us are aware of recent news reports of senior US officials using the Signal app to discuss sensitive military strikes, a practice that has raised controversy and prompted investigations into security protocols and communication practices.

Leaked Signal chats exposed vital military information, including the identity of a Houthi missile expert and details about weapon systems like F-18 jets and attack drones.

Jeffrey Goldberg, editor-in-chief of the Atlantic, was inadvertently included in the chat; the magazine later published the full transcript, which contained sensitive information about US military strikes against Houthi positions.

The Signal transcript showed Defense Secretary Pete Hegseth disclosed the exact timings of warplane launches and bomb drops before the attacks on Yemen’s Houthis.

Major news outlets, including the New York Times, the Washington Post, the Atlantic, AP, CNN, Fox News, and PBS, have reported on the unauthorized disclosure of Signal chats involving senior US officials and sensitive military information.

POLITICO, an often-cited source for news on politics, reported April 2 of this year that “a dozen current and former officials confirmed” Signal is used across government agencies, even though there are “warnings about its security vulnerabilities” and “no clear oversight” of how it’s used.

The AP (Associated Press), on March 24 of this year, found Signal accounts for government officials “in nearly every state, including many legislators and their staff,” with some accounts registered to “government cellphone numbers” and others to “personal numbers.”

The AP notes that encrypted apps like Signal “often skirt open records laws,” and that “without special archiving software, the messages frequently aren’t returned under public information requests.”

The media has called this “The Signal Saga,” “Signal Scandal,” and “Signalgate.”

“Signalgate” reminds me of the Watergate Senate hearings, which were nationally televised from May to November 1973 – and yes, I do remember watching them.

Signal: a public messaging app

Signal is an open-source messaging app that offers end-to-end encryption.

It is operated by the Signal Technology Foundation, a non-profit organization founded in 2018 by Moxie Marlinspike and Brian Acton.

Signal maintains global accessibility by using cloud infrastructure from providers like Amazon Web Services (AWS), Google Compute Engine, and Microsoft Azure.

Signal offers strong end-to-end encryption, but its use of centralized public cloud servers presents security risks, particularly when dealing with sensitive government information.

The encryption itself is not compromised, but using infrastructure outside direct government control increases the risk of unauthorized access or exploitation, making Signal unacceptable for US government-classified communications.

Signal’s source code is available on GitHub at https://github.com/signalapp, and its official website is https://signal.org.

NPR (National Public Radio) reported March 25 of this year, “The Pentagon issued a department-wide advisory March 18, 2025, warning against using Signal even for unclassified information.”

It highlighted the dangers of using third-party messaging apps for official communications due to vulnerabilities that foreign adversaries could exploit.

The Pentagon clarified that third-party messaging apps like Signal may be used for unclassified accountability or recall exercises, but are not authorized to process or store non-public unclassified data.

SIPRNet: secure communications infrastructure:
The US government’s SIPRNet (Secret Internet Protocol Router Network) originated in the early 1980s with the launch of Defense Secure Network 1 (DSNET 1) under the Defense Data Network (DDN) initiative.

While SIPRNet was not formally named until the 1990s, its operational roots trace back to this classified communications effort, which aimed to create a secure infrastructure for classified communications across various levels of sensitivity.

SIPRNet, which evolved from DSNET 1, became operational by 1997 and serves as the Department of Defense’s classified network for secret-level information.

SIPRNet is used for secure communication between military branches, government agencies, and international partners.

It handles classified information up to the secret level and employs government-approved encryption.

SIPRNet enables real-time data sharing that is secured by strict encryption and multi-factor authentication (MFA).

It operates on a physically isolated infrastructure, separate from both the public internet and NIPRNet (Non-classified Internet Protocol Router Network), the Department of Defense’s global network for unclassified data.

Its security is reinforced through host-based security systems, continuous compliance monitoring, and tools like HBSS (Host-Based Security System) and ACAS (Assured Compliance Assessment Solution).

The US Department of Defense enforces strict rules to protect data integrity, including strong password policies, separate admin accounts, and the banning of unauthorized software or hardware.

Regular audits and readiness inspections:
Cyber Command Readiness Inspections (CCRIs) ensure that security measures are continuously maintained and that any weaknesses are promptly addressed.

Unlike Signal, which uses commercial infrastructure, SIPRNet uses specialized defense-in-depth systems to provide a secure environment for classified communications.

SIPRNet protects sensitive data and national security by using a physically isolated network, strong encryption, and multi-factor authentication with hardware tokens.

Why Signal is not suitable for classified communications:
Signal works well for personal secure messaging, but its reliance on third-party cloud services and the public internet means it does not meet US military standards for classified communications; SIPRNet, with its isolated infrastructure, provides the stronger protections that sensitive government information requires.

Signalgate has brought to our attention the importance of secure communication protocols for safeguarding our nation’s sensitive information and, most of all, maintaining the public’s trust.

Created using Imagen-3 on Gemini Advanced AI