Tweet This! :)

Friday, December 6, 2024

The mysterious miniature ‘space shuttle’

© Mark Ollig

The X-37B is an uncrewed spaceplane shrouded in secrecy with a design resembling a miniature NASA Space Shuttle.

NASA initiated the X-37 program in 1999, aiming to develop a reusable space transportation system.

This program encompassed the X-37B, a small, autonomous spacecraft designed for operation in low Earth orbit.

In 2004, the X-37B program was transferred from NASA to the US Air Force. 

Today, operational control of the X-37B resides with the US Space Force, which assumed responsibility in 2020.

The American-made X-37B is a reusable orbital spaceplane measuring 29 feet in length, with a 15-foot wingspan and a height of 9.5 feet.

Its weight remains officially classified, although various sources suggest it is around 11,000 pounds.

The X-37B is designed for vertical takeoffs and is launched encased within a protective fairing shell atop a rocket. 

After completing a mission, the spaceplane lands on a runway, similar to NASA’s space shuttle.

The US Air Force’s official website states that the X-37B is a reusable experimental spacecraft for conducting orbital experiments and advancing future US space technologies.

The payloads and experiments conducted using the X-37B are mainly classified.

The X-37B’s designation follows the US military’s experimental aircraft naming convention. The “X” denotes experimental, “37” is a sequential number, and “B” indicates the second iteration.

Note that not every number from one through 36 corresponds to an actual aircraft, due to canceled projects and designation changes.

The Orbital Test Vehicle-1 (OTV-1), also known as USA-212, marked the Boeing X-37B’s first flight when it launched aboard an Atlas V 501 rocket from Cape Canaveral, FL, April 22, 2010.

The Atlas V 501 rocket, operated by United Launch Alliance, stood at 191.3 feet tall, had a diameter of 12.5 feet, and weighed approximately 1.3 million pounds.

The rocket’s RD-180 liquid-fueled engine produced 860,000 pounds-force of thrust.

The Atlas V 501’s Centaur upper stage was powered by an RL10A-4-2 liquid-fueled engine, producing 22,300 pounds-force of thrust.

The X-37B’s first mission, lasting 224 days, tested technologies such as guidance, navigation, and control systems; thermal protection; satellite sensors; autonomous orbital operation; and re-entry and landing capabilities.

The X-37B reportedly operated in a low Earth orbit at an approximate altitude range of 150 to 500 miles above the planet.

The X-37B contributes to Space Domain Awareness (SDA), which entails monitoring space activities and objects, including satellites and debris, that could impact US operations.

SDA helps the Space Force identify potential orbital threats and ensure safe operations for US spacecraft and satellites.

The X-37B operates in low Earth orbit, gathering data on satellites, debris, and other space activities. 

Its secrecy is driven by national security, the pursuit of technological advantage, and protection against adversaries.

General information about launches, non-classified orbital details, and landings is typically made public; however, specific mission objectives, payload details, and most experiment specifics remain confidential.

The X-37B tests various new space technologies, including advanced reusable spacecraft systems, autonomous navigation and control systems, and novel propulsion technologies.

Today, the Air Force’s Rapid Capabilities Office is involved in the program’s development, while the US Space Force oversees on-orbit operations.

The X-37B OTV-6 mission launched May 17, 2020, from Cape Canaveral Space Force Station, FL, aboard an Atlas V 501 rocket.

A service module was introduced on the X-37B with the OTV-6 mission, enabling the spacecraft to carry out more experiments, store additional fuel, increase the spacecraft’s orbital range and duration, and support complex maneuvers.

The module also facilitates payload deployment and retrieval, hosting experiments like the Naval Research Laboratory’s Photovoltaic Radiofrequency Antenna Module and the US Air Force Academy’s FalconSat-8.

After spending 908 days in orbit, the X-37B landed at NASA’s Kennedy Space Center Nov. 12, 2022, ending the OTV-6 mission.

The seventh mission of the X-37B, OTV-7 (Orbital Test Vehicle-7) spacecraft launched on a SpaceX Falcon Heavy rocket, designated USSF-52, from Cape Canaveral Space Force Station Dec. 28, 2023.

The Falcon Heavy may have been chosen because this mission required greater payload capacity to carry more fuel, conduct more experiments, or reach higher orbits.

The US Air Force announced that the X-37B began a series of aerobraking maneuvers Oct. 10 of this year.

These maneuvers modify the spacecraft’s orbit by using drag from Earth’s upper atmosphere to slow it, which saves fuel and enables extended missions.

The secretive miniature ‘space shuttle,’ the X-37B, is currently in Earth orbit. It will continue its mission before eventually de-orbiting and returning to Earth, as it has done successfully in its previous six missions.

While many details about the X-37B remain classified, it continues to intrigue and generate speculation.

The X-37B remains a classified conundrum wrapped in an enigma.

After eight months in space, a Chinese “reusable experimental spacecraft” landed in the Gobi Desert in northern China Sept. 6.

While not officially confirmed as a direct response to the secretive X-37B spaceplane program, it does seem to signal a strategic answer to advancements in US spaceplane technology.




Friday, November 29, 2024

The ‘Early Bird’ still soars

© Mark Ollig


In October 1945, science fiction writer Arthur C. Clarke foretold the use of space satellites in an article titled “Extra-Terrestrial Relays – Can Rocket Stations Give Worldwide Radio Coverage?”

He wrote it for Wireless World, an engineering and technology magazine published in the United Kingdom.

Clarke proposed a system of “extra-terrestrial relays” and used the term “space stations” to describe the concept of artificial satellites orbiting the Earth for communication purposes.

He proposed positioning three satellites in orbits roughly 26,000 miles from Earth’s center (about 22,300 miles above the surface) to provide continuous global radio coverage.

Each satellite would act like a communication relay. It would pick up signals from anywhere within its coverage area on Earth and then broadcast those signals to other locations within its hemisphere.

Clarke also addressed the need for powerful rockets to place these satellites into orbit, stating, “The development of rockets sufficiently powerful to reach orbital, and even [earth gravitational] escape velocity is now only a matter of years.”

President Kennedy signed the Communications Satellite Act Aug. 31, 1962, leading to the creation of the Communications Satellite Corporation (COMSAT), which the US Congress authorized to establish a global commercial satellite communication system.

In 1964, COMSAT played a pivotal role in the establishment of the International Telecommunications Satellite Organization (INTELSAT) to oversee communications satellites for providing telephone, television, and data transmission services on a global scale.

NASA successfully launched the INTELSAT 1 F-1 satellite, named INTELSAT 1, atop a three-stage Delta rocket from Complex 17A at Cape Kennedy, FL, April 6, 1965.

It was nicknamed “Early Bird” from the saying, “The early bird catches the worm.”

It was the first satellite launched and operated by an intergovernmental consortium called INTELSAT, founded in 1964.

Hughes Aircraft Company built INTELSAT 1 for COMSAT; it was the first commercial communications satellite placed in geosynchronous orbit.

At an altitude of 22,300 miles, the satellite, spinning at 152 revolutions per minute, was positioned over the equator at 28° west longitude in a synchronous equatorial orbit over the Atlantic Ocean.

Early Bird’s orbital period matched Earth’s rotation, allowing it to appear to hover over a fixed point above the planet.
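That 22,300-mile altitude is not arbitrary; it falls out of Kepler’s third law for an orbit whose period equals one sidereal day. A minimal Python sketch, using standard textbook values for Earth’s gravitational parameter and radius:

```python
import math

# Standard gravitational parameter of Earth (m^3/s^2) and mean radius (m)
GM = 3.986004418e14
EARTH_RADIUS = 6.371e6

# A geosynchronous satellite's orbital period equals one sidereal day
T = 86164.1  # seconds

# Kepler's third law for a circular orbit: r^3 = GM * T^2 / (4 * pi^2)
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)

altitude_miles = (r - EARTH_RADIUS) / 1609.34
print(f"Geosynchronous altitude: about {altitude_miles:,.0f} miles")
```

The result comes out near 22,240 miles, matching the commonly quoted 22,300-mile figure within rounding.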

Ground stations adjusted their antennas for a direct line of sight to the satellite, ensuring uninterrupted data transmission between North America and Europe.

This 85-pound, cylindrical satellite (28 inches in diameter, 23 inches tall) used solar cells to power its electronics, which included two six-watt transponders operating on a 50 MHz bandwidth.

The Early Bird could handle 240 simultaneous transatlantic phone calls, telegraph and facsimile transmissions, and television broadcasts.

Early Bird transmitted television coverage of the Gemini 6 spacecraft splashdown Dec. 16, 1965, with astronauts Thomas Stafford and Walter Schirra onboard.

I first became curious about the Early Bird satellite while watching a YouTube video of heavyweight boxing champion Muhammad Ali fighting Cleveland Williams Nov. 14, 1966.

“I’d like at this time to compliment the thousands of people in the United Kingdom, who, where it is nearly four-o’clock, are jamming the theaters over there to see our telecast via the Early Bird satellite,” announced boxing commentator Don Dunphy.

The Early Bird satellite used one channel to broadcast television programs between the two continents, ushering in a new era of live international television.

The phrase “live via satellite” emerged during this era of live transatlantic television broadcasts.

The success of Early Bird proved the practicality of using synchronous orbiting space satellites for commercial communications.

Early Bird ceased operation in January 1969; however, it was reactivated in July of that year when a communications satellite assigned to the Apollo 11 moon mission failed.

In August 1969, the INTELSAT 1-F1 satellite, the Early Bird, was deactivated.

In 1990, INTELSAT briefly reactivated Early Bird to commemorate the satellite’s 25th anniversary.

According to NASA, as of today, “Early Bird is currently inactive.”

An INTELSAT video of Early Bird’s April 6, 1965, launch from Cape Canaveral, FL, can be seen at https://tinyurl.com/y4uvqn2s.

In the video, look for two famous Minnesotans: Hubert Humphrey, vice president of the United States, then chairman of the National Aeronautics and Space Council, and Sen. Walter Mondale, who witnessed (via closed-circuit TV) the rocket launch of the Early Bird.

LIFE magazine had an article about the satellite May 7, 1965, cleverly titled, “The Early Bird Gets the Word.”

In 1965, the word was the Early Bird, which reminds me of the Minneapolis garage band The Trashmen’s 1963 hit “Surfin’ Bird” with its lyrics “A-well-a don’t you know about the bird? Well, everybody knows that the bird is a word.”

Arthur C. Clarke’s 1945 article can be read at: https://bit.ly/Clarke1945.

The full 68-page October 1945 Wireless World magazine can be read at: https://bit.ly/3YYc48r.

The 59-year-old Early Bird satellite still soars approximately 22,300 miles above us in a geosynchronous orbit. Its International Designator Code is 1965-028A.

The satellite tracking website n2yo.com shows the Early Bird’s location in real-time; see it at https://bit.ly/3gLNTSB.

In 2025, INTELSAT will be donating a full-sized replica of the INTELSAT 1 satellite, aka the Early Bird, to the Smithsonian’s National Air and Space Museum in Washington, DC.



Friday, November 22, 2024

AI engines: GPUs and beyond

© Mark Ollig

Founded on April 5, 1993, NVIDIA Corp. is headquartered in Santa Clara, CA.

“NVIDIA” originates from the Latin word “invidia,” meaning “envy.” The company’s founders reportedly chose this name with the aim of creating products that would be the envy of the tech industry.

Initially, NVIDIA focused on developing graphics processing units (GPUs) for the computer gaming market.

Supercomputing graphics cards rely on specialized electronic circuits, primarily the GPU, to perform a large number of calculations.

These circuits act as the “computing muscle,” allowing the card to handle complex graphics rendering, AI processing, and other workloads.

GPUs execute complex calculations and accelerate applications with heavy graphical and video processing demands.

Graphics cards are used for computer gaming, self-driving cars, medical imaging analysis, and artificial intelligence (AI) natural language processing.

The growing demand for large language AI models and applications has expanded NVIDIA’s graphics card sales.

The NVIDIA H200 Tensor Core GPU datasheet states it is designed for generative AI and high-performance computing (HPC), with specialized processing units that enhance performance in AI computations and matrix operations.

The NVIDIA H200 Tensor Core GPU features 141 GB of High-Bandwidth Memory 3 Enhanced (HBM3e), ultra-fast memory technology enabling rapid data transfer of large AI language models and scientific computing tasks.

The H200 is the first graphics card to offer HBM3e memory, providing 1.4 times the memory bandwidth of the H100 and nearly double the memory capacity.

It also has a memory bandwidth of 4.8 terabytes per second (TB/s), the rate at which data moves between the GPU and its onboard memory.

This memory bandwidth significantly increases computing capacity and performance, allowing researchers in scientific computing to work more efficiently.
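The bandwidth claims can be sanity-checked with simple arithmetic. The short Python sketch below assumes the commonly published datasheet figures for both cards (80 GB and 3.35 TB/s for the H100) and treats 1 TB as 1,000 GB:

```python
# Published specs (GB of HBM memory, TB/s of memory bandwidth)
h100 = {"memory_gb": 80, "bandwidth_tbps": 3.35}
h200 = {"memory_gb": 141, "bandwidth_tbps": 4.8}

# Bandwidth ratio: how much faster the H200 moves data on-card
ratio = h200["bandwidth_tbps"] / h100["bandwidth_tbps"]
print(f"H200 bandwidth is {ratio:.1f}x the H100's")  # 1.4x

# Time to sweep the entire 141 GB memory once at full bandwidth
read_time = h200["memory_gb"] / (h200["bandwidth_tbps"] * 1000)
print(f"Full memory sweep: about {read_time * 1000:.0f} ms")  # about 29 ms
```

The 4.8/3.35 ratio works out to roughly 1.4, matching the datasheet’s claim, and 141/80 explains the “nearly double” memory figure.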

It is based on NVIDIA’s Hopper architecture, which enhances GPU performance and efficiency for AI and high-performance computing workloads; the architecture is named after computer scientist Grace Hopper.

In AI, “inference” is the process of applying a trained model to new information to produce results. To do this quickly, AI can use multiple computers in the “cloud” (computers connected over the internet).

The H200 delivers roughly twice the inference speed of the H100 on large language models like Meta AI’s Llama 3.

The higher memory bandwidth ensures faster data transfer, reducing bottlenecks in complex processing tasks.

It is designed to process the increasingly large and complex data processing needs of modern AI technology.

As these tasks become more complex, GPUs need to become more powerful and efficient. Researchers are exploring several technologies to achieve this.

Next-generation memory systems, like 3D-stacked memory, which layers memory cells vertically, will enhance data transfer speeds.

High-Performance Computing (HPC) leverages powerful computers to solve complex challenges in fields like scientific research, weather forecasting, and cryptography.

Generative AI is a technology for creating writing, pictures, or music. It also powers large language models, which can understand and create text resembling human writing.

Powerful GPUs generate significant heat and require advanced cooling.

AI optimizes GPU performance by adjusting settings, improving efficiency, and extending their lifespan. Many programs use AI to fine-tune graphics cards for optimal performance and energy savings.

Quantum processing uses the principles of quantum mechanics to solve complex problems that are too difficult to address using traditional computing methods.

Neuromorphic computing, exemplified by spiking neural networks, seeks to mimic the efficiency and learning architecture of the human brain.

As GPUs push the limits of classical computing, quantum computing is emerging with QPUs (quantum processing units) at its core.

QPUs use quantum mechanics to solve problems beyond the reach of even the most powerful GPUs, with the potential for breakthroughs in AI and scientific research.

Google Quantum AI Lab has developed two quantum processing units: Bristlecone, with 72 qubits, and Sycamore, with 53 qubits.

While using different technologies, QPUs and GPUs may someday collaborate in future hybrid computing systems, leveraging their strengths to drive a paradigm shift in computing.

Google’s Tensor Processing Units (TPUs) specialize in deep learning tasks, such as matrix multiplications.

Other key processors fueling AI computing include Neural Processing Units (NPUs), which accelerate neural network training and execution, and Field-Programmable Gate Arrays (FPGAs), which excel at parallel processing and are essential for customizing AI workloads.

Additional components utilized for AI processing are Application-Specific Integrated Circuits (ASICs) for tailored applications, System-on-Chip designs integrating multiple components, graphics cards leveraging GPU architecture, and Digital Signal Processors (DSPs).

These components are advancing machine learning, deep learning (a technique using layered algorithms to learn from massive amounts of data), natural language processing, and computer vision.

GPUs, TPUs, NPUs, and FPGAs are the “engines” fueling the computational and processing power of supercomputing and artificial intelligence.

They will likely be integrated with and work alongside quantum processors in future hybrid AI systems.

I still find the advancements in software and hardware technology that have unfolded throughout my lifetime incredible.

I used Meta AI’s large language model program (powered by Llama 3) and its text-to-image generator (AI Imagined) to create the attached image of two people standing in front of a futuristic “AI Quantum Supercomputer.” The image was created using my text input, and the AI created the image with no human modifications.






Friday, November 15, 2024

Accelerating the future: supercomputing, AI, part one

© Mark Ollig


Intel Corp. (INTC), founded in 1968, had been the semiconductor sector’s major representative on the Dow Jones Industrial Average (DJIA) since 1999, until this month.

The DJIA, established May 26, 1896, with 12 companies, is today an index tracking the stock performance of 30 major companies traded on the US stock exchange.

In 1969, Intel partnered with Busicom, a Japanese calculator manufacturer founded in 1948, to develop a custom integrated circuit (IC) for its calculators.

This contract led to the creation of the Intel 4004, the first commercial microprocessor, used in 1971 in Busicom’s 141-PF calculator; the 4004 combined the central processing unit (CPU), memory, and input and output controls on a single chip.

By integrating transistors, resistors, and capacitors, ICs like the Intel 4004 revolutionized the electronics industry, enabling the design of smaller, more powerful devices. Intel designs and manufactures ICs, including microprocessors, which form the core of a CPU.

Today’s Intel CPUs are supplied to computer manufacturers, who integrate them into laptops, desktops, and data centers for cloud computing and storage.

The Dow Jones recently replaced Intel with NVIDIA (NVDA), a technology company founded in 1993.

“NVIDIA” originates from the Latin word “invidia,” meaning “envy.” The company’s founders reportedly chose this name with the aim of creating products that would be the envy of the tech industry (interestingly, their green logo hints at this, symbolizing the ‘green with envy’ competitors).

NVIDIA has outpaced Intel in artificial intelligence (AI) hardware; however, Intel has strengthened its AI capabilities with its Core Ultra processor series and Gaudi 3.

Gaudi 3 is a data center-focused AI accelerator designed for specialized workloads such as processing AI data in large-scale computing centers and AI high-level language training.

The DJIA has clearly recognized this shift, highlighting NVIDIA’s growing influence in AI via its graphics processing units (GPU) technology.

The GPU functions as the computing muscle executing tasks, specifically complex calculations and graphic processing necessary for graphics-heavy applications.

Mostly known for designing and manufacturing high-performance GPUs, NVIDIA released its first graphics chip, the NV1, in 1995, followed by the RIVA 128 in 1997 and the GeForce 256, marketed as the world’s first GPU, in 1999.

GPUs are processors engineered for parallel computation, excelling in tasks that require simultaneous processing, such as rendering complex graphics in video games and editing high-resolution videos.

Their architecture also makes them especially suited for AI and machine learning applications, where their ability to rapidly process large datasets substantially reduces the time required for AI model training.
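The row-independence that makes matrix math GPU-friendly can be seen even in plain Python: each output row of a matrix product depends only on its own inputs, so rows can be computed concurrently. This toy sketch uses threads to stand in for the thousands of GPU cores; it illustrates the idea of data parallelism, not how a real GPU is programmed:

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_row(args):
    """Compute one output row of C = A @ B."""
    row, B = args
    cols = len(B[0])
    return [sum(row[k] * B[k][j] for k in range(len(B))) for j in range(cols)]

def parallel_matmul(A, B, workers=4):
    # Each output row depends only on one row of A and all of B, so the
    # rows can be computed simultaneously -- the same independence a GPU
    # exploits across thousands of cores at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(matmul_row, ((row, B) for row in A)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(parallel_matmul(A, B))  # [[19, 22], [43, 50]]
```

A GPU applies this pattern at a vastly larger scale, which is why training an AI model, essentially an enormous stack of matrix multiplications, runs so much faster on one.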

Originally focused on graphics rendering, GPUs have evolved to become essential for a wide range of applications, including creative production, edge computing, data analysis, and scientific computing.

NVIDIA’s RTX 4070 Super GPU, priced between $500 and $600, targets mainstream gamers and content creators seeking 4K resolution.

For more complex workloads, the RTX 4070 Ti Super GPU, priced at around $800, offers higher performance for 3D modeling, simulation, and analysis for engineers and other professionals.

Financial experts use GPU technology for analyzing data used in monetary predictions and risk assessments to make better decisions.

Elon Musk launched his new artificial intelligence company, “xAI,” March 9, 2023, with the stated mission to “understand the universe through advanced AI technology.”

The company’s website announced the release of Grok-2, the latest version of its AI language model, featuring “state-of-the-art reasoning capabilities.”

Grok is reportedly named after a Martian word in Robert A. Heinlein’s science fiction novel “Stranger in a Strange Land,” implying a deep, intuitive understanding – at least Musk did not name it HAL 9000 (“2001: A Space Odyssey”) or the M-5 multitronic computing system (“Star Trek”).

xAI’s “Colossus” supercomputer employs NVIDIA’s Spectrum-X Ethernet networking platform, providing 32 petabits per second (Pbps) of bandwidth to handle the data flows necessary for training AI large language models.

It also uses 100,000 NVIDIA H100 GPUs (the “H” honors Grace Hopper, a pioneering computer scientist), with plans to expand to 200,000 GPUs, including the new NVIDIA H200 models.

The NVIDIA H100 GPU has 80 GB of HBM3 high-bandwidth memory and 3.35 terabytes per second (TB/s) of bandwidth.

The NVIDIA H200 GPU, released in the second quarter of 2024, has 141 GB of HBM3e (extended) memory and 4.8 TB/s bandwidth.

It provides better performance than the H100 in some AI tasks, with up to a 45% increase; delivers inference speeds (a measure of how quickly an AI processes new information and generates results) two times faster on large language models; and uses less energy than the H100.

Both the H100 and H200 GPUs are based on the Hopper architecture.

Be sure to read next week’s always-exciting Bits and Bytes for part two.


NVIDIA HGX H200 141GB 700W 8-GPU Board

Elon Musk’s ‘Colossus’ AI training system
 with 100,000 Nvidia chips
(GPU module in the foreground)


Friday, November 8, 2024

The web’s early threads

© Mark Ollig

The internet, our foundational digital network, is similar to the paved highways connecting various destinations.

The web is an application that runs on top of the internet, much like the cars that travel over our highways.

The web operates on top of the internet’s underlying infrastructure, using its own layer of software and protocols to enable website interactions.

British computer scientist Tim Berners-Lee presented “Information Management: A Proposal,” a recommendation for a distributed hypertext system, to his colleagues at the European Organization for Nuclear Research, or CERN, in Geneva, Switzerland, March 12, 1989.

Tim Berners-Lee did not conceive the idea of hypertext; earlier ideas and developments influenced his work.

In 1945, Vannevar Bush, an American engineer who led the US Office of Scientific Research and Development during World War II, said, “Consider a future device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.”

“As We May Think,” an essay Bush wrote and published in July 1945 in The Atlantic Monthly magazine, describes a personalized information system, an electromechanical device called “Memex” that stores and links information; at that time, the technology was not yet available to build it.

Although never constructed, Memex, a theoretical concept, influenced the development of hypertext and personal computing.

In 1960, Ted Nelson began Project Xanadu to develop a user-friendly way to cross-reference documents electronically with computers; however, due to technical and coordination issues, it was never completed.

In 1965, Nelson coined the terms “hypertext” and “hypermedia” as methods for instantly interconnecting documents via active links. Hypertext links words, images, and concepts to related information. For example, while reading a website about coffee, you can click a link to see a map of where beans come from or learn how espresso is made.

During the 1960s, Douglas Engelbart developed the oN-Line System (NLS), which used hypertext-like linking computer coding and introduced graphical computing elements that influenced the development of graphical user interfaces by Xerox and Apple.

“When I first began tinkering with a software program that eventually gave rise to the idea of the World Wide Web, I named it Enquire, short for “Enquire Within Upon Everything,” a musty old book of Victorian advice I noticed as a child in my parents’ house outside London,” Tim Berners-Lee wrote on the first of 226 pages in his book “Weaving the Web” from 1999, which I own.

Berners-Lee said the title, “Enquire Within Upon Everything,” was suggestive of magic, and that the book was a portal to a world of information.

“In 1980, I wrote a program, Enquire, for tracking software in the PS [Proton Synchrotron] control system. It stored snippets of information and linked related pieces. To find information, one progressed via links from one sheet to another,” Berners-Lee said.

During his March 12, 1989, presentation, Berners-Lee diagrammed a flowchart showing CERN network users distributing, accessing, and collaborating on electronic files. Electronic documents would be viewed and modified regardless of the computer model or operating system.

He proposed a generic client user-friendly interface, a “browser” software for a computer user to interact with hypertext data servers.

Berners-Lee compared hypertext to a phone directory, with links connecting information about people, places, and categories using a system where users could access a single database via a distributed file system.

He developed the protocols for linking databases and storing documents across a network.

Tim Berners-Lee and his colleague Robert Cailliau, a Belgian informatics engineer who in 1987 proposed a hypertext system for CERN, collaborated to introduce the phrase “WorldWideWeb” Nov. 12, 1990.

A basic form of the WorldWideWeb software was operating at CERN by Dec. 25, 1990.

Berners-Lee wrote the initial code for Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), and the first web browser, which he called “WorldWideWeb,” as a tool for sharing information at CERN. He used a NeXTcube computer workstation.

During 1989 to 1991, he designed the World Wide Web, creating HTTP, URL, and HTML coding technologies to allow linked document sharing over the internet.
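Of that trio, the URL is the easiest to illustrate: it bundles the protocol, the server, and the document path into one address. A short Python sketch using the standard library to pull apart the address of CERN’s first web page:

```python
from urllib.parse import urlparse

# A URL encodes everything needed to locate a document on the web:
# the protocol to speak, the server to contact, and the path to the page.
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)

print(parts.scheme)  # 'http' -> the protocol (HTTP)
print(parts.netloc)  # 'info.cern.ch' -> the server hosting the document
print(parts.path)    # '/hypertext/WWW/TheProject.html' -> the document itself
```

A browser performs this same decomposition on every address, then uses HTTP to ask the named server for the named document, which arrives as HTML.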

Public computer users outside CERN could use Berners-Lee’s WorldWideWeb hypertext software over the internet starting Aug. 6, 1991.

In 1993, the internet primarily connected universities, private companies, and government computers.

In late 1992, the Mosaic web browser was being developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign. Marc Andreessen and Eric Bina are often credited as the lead developers and driving forces behind it.

The NCSA Mosaic, version 1.0, could be freely downloaded from the NCSA website April 22, 1993.

I remember using NCSA Mosaic on my HP OmniBook 300 laptop computer. Mosaic featured icons, bookmarks, pictures, and a friendly user interface integrating multimedia with text and graphics.

The web, an overlaying program on the internet, operates in harmony with the internet protocols created by Vint Cerf and Robert Kahn.

Tim Berners-Lee’s first hypertext website can be seen at https://bit.ly/48AtVXt.

Today, we are witnessing the World Wide Web evolving. Its threads between information, social media, business, government, and e-commerce sites are rapidly merging with artificial intelligence, so hang on, folks!

A snapshot of the 1993 NCSA Mosaic web browser in use.
(Photo Public Domain, by Charles Severance)


Friday, November 1, 2024

Remember to cast your ‘ballotta’

© Mark Ollig


The word “election” originated in the 13th century from the Anglo-French language, meaning “choice” or “selection.”

Ballotta is the Italian word for the “little ball” used in voting, from which the English word “ballot” is derived.

In the late 1500s, the people of Venice, Italy, would register their vote (in secret) by dropping a specifically marked or colored ball (ballot) into a container. The different colors or markings on the balls represented a particular vote or candidate. The balls were then counted to determine the winning choice.

Around 508 BC, Athens created a system to protect their democracy from tyrants and prevent anyone from gaining excessive power.

Citizens could write the name of someone they believed threatened the stability and well-being of the entire community on a piece of pottery called an ostrakon. If enough votes were cast, the person faced exile. This practice, known as ostracism, is where the word “ostracize” originates.

In 1856, William Boothby, then a sheriff of Adelaide in southern Australia, developed the Australian ballot method.

This method used secret paper ballots on government-issued forms printed with candidates’ names. Voters cast their ballots in an enclosed booth and placed them in a secure ballot box for hand-counting.

Boothby’s Australian secret ballot method spread to Europe and the United States, where voters in Massachusetts first used it during a US presidential election Nov. 6, 1888.

Early voting practices in our country often involved publicly declaring one’s chosen candidate aloud or writing their name on paper, often in front of others. This practice is known as non-secret voting.

Citizens using this “non-secret” ballot method were sometimes intimidated, coerced, or bribed monetarily to cast their vote for a particular candidate.

In 1888, Massachusetts and New York adopted the Australian ballot system statewide, which included government-printed ballots listing all the candidates, rather than having voters write in names or use ballots provided by political parties.

Privacy for ballot marking was ensured using compartmental booths or tables with partitions.

The Minnesota Election Law of 1891 required the use of the Australian ballot system for all general elections.

The January 1891 general statutes of the State of Minnesota says, “The election law of 1891, bringing the entire state under the so-called Australian system of voting in general elections, imposes important duties upon this office, also upon each and every town board and town clerk, all of which must be performed in proper order to secure a valid election.

“Under section 44 of said law, each election district must be provided with three ballot boxes for voting, one ballot box painted white, one painted blue, and one painted black. There shall also be provided in each election precinct, two voting booths for every hundred electors registered. There shall also be provided an indelible pencil for each voting booth.”

Our use of the term “voting booth” likely originated from the name William Boothby, although this is not definitively proven. Back in the 19th century, a booth was also considered an enclosed space for animals inside a barn.

Minnesota used the Australian system of voting for the 1892 US presidential election.

A Minneapolis Times newspaper article from Nov. 15, 1892, titled, “Comment on the Australian Ballot System of counting” stated, “The Australian ballot law has its limitations, and those who’ve worked closely with it, like election judges, generally agree that while it’s effective in preventing illegal voting and ensuring ballots are cast secretly, it falls short when it comes to counting those ballots.”

Jacob Hiram Myers (1841 to 1920) obtained US Patent 415,549, titled “Voting Machine,” Nov. 19, 1889, for the first mechanical lever voting machine.

In 1890, he founded Myers American Ballot Machine Company, and his voting machines were first used in Lockport, NY, in 1892 for a town election.

Unfortunately, Myers’ voting machines encountered significant problems during the Rochester, NY election of 1896, after which his company closed.

By the 1930s, improved models of mechanical lever voting machines were being used in many US cities; however, they suffered various problems, including tampering, and by 1982, most US production of these machines had ended.

Reading ballots using an optical mark-sense scanning system was first used in 1962 in Kern City, CA.

The Norden Division of United Aircraft and the City of Los Angeles designed and built this ballot reading method, which was also used in Oregon, Ohio, and North Carolina.

In the 1964 presidential election, voter jurisdictions in two states, California and Georgia, used punch cards and computer tabulation machines.

The 2000 presidential election is remembered for Florida’s punch card ballots and their “hanging chads” recount.

US Patent 3,793,505 was granted for “the Video Voter” Feb. 19, 1974. The abstract described it as “An electronic voting machine including a video screen containing the projected names of candidates or propositions being voted.”

The Video Voter was used in Illinois in 1975 and is considered the first direct-recording electronic voting machine used in an election.

Today, Minnesota ballot tabulators use optical scanner equipment to read and record the ballot vote for each candidate. Companies providing the state’s voting equipment include Dominion Voting Systems, Election Systems & Software (ES&S), and Hart InterCivic (Hart Verity).

Be sure to exercise your right to vote.



Image depicting a New York polling place from 1900, showing voting booths on the left. The image is public domain and is from the 1912 “History of the United States,” volume V, Charles Scribner’s Sons, New York.


Friday, October 25, 2024

My ‘additive manufacturing’ journey

© Mark Ollig


3D printing, also known as additive manufacturing, creates physical objects from digital files. 

These files can be designed with Computer-Aided Design (CAD) software or found online.

A 3D printer builds these physical objects layer by layer from materials like plastics and metals.

Recently, my two youngest sons gave me a Bambu Lab A1 mini 3D printer as a birthday gift. 

My second-oldest son is a 3D printing enthusiast and has printed several models for me, including the NASA Viking 1 lander and the James Webb Space Telescope. 

During the COVID-19 pandemic, he printed sturdy casings to hold the filters used with N95 masks. 

He also printed an incredibly realistic miniature of the moon’s surface using simulated lunar regolith (artificial moon dust). This mini moonscape now serves as the landing spot for my model of the Apollo 11 lunar module.

The Bambu printer came out of the box nearly fully assembled. It fits nicely on the wooden stand that once held my Xerox laser printer, which I gave to my oldest son after purchasing a new HP model.

The Bambu Handy software application allows me to control the 3D printer directly from my smartphone or laptop. 

Today’s computing landscape is all about apps and cloud-based programs, a stark contrast to the floppy disk days of yesteryear.

According to Bambu Lab’s website, their A1 mini 3D printer weighs 12.2 pounds and measures 13.7 inches high, 12.4 inches wide, and 14.4 inches deep. 

The build volume, or maximum size of the object model it can print, is 7.1 by 7.1 by 7.1 inches.

I turned on the 3D printer, installed its app, connected the printer to my internet router’s Wi-Fi, and registered with Bambu Lab.

Bambu Lab’s 3D printers use custom, non-open-source computing firmware, reportedly a Linux-based operating system. Two popular open-source firmware options for 3D printers are Marlin, created in 2011, and Klipper, developed in 2016.

I loaded the Polymaker spool of 1.75 mm (0.069-inch) polylactic acid (PLA) filament onto the 3D printer’s spool holder. PLA is a type of biodegradable plastic. The spool, on which 1,082 ft of filament is rolled, weighs 2.2 lb and is made of recycled cardboard. 

Next, I threaded the Savannah Yellow-colored filament into the polytetrafluoroethylene (PTFE) guide tube, which led to the printer’s hardened steel extruder.

The Bambu Lab A1 has four stepper motors, one of which powers the extruder, which draws the filament into the nozzle within the tool head, where it is heated from 374 to 446 °F. The printer is capable of reaching temperatures up to 572 °F.

Calibration of the 3D printer involves leveling its build plate and adjusting nozzle height, filament flow, temperature, and belt tension to ensure accurate and reliable layer printing at speeds up to 19.7 inches per second. 

The dynamic flow control program ensures the 3D printer dispenses the correct amount of plastic filament.
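As a rough illustration of the kind of arithmetic such flow control performs (my own sketch, not Bambu Lab’s actual algorithm), the volume of plastic a printer must melt each second can be estimated from the line width, layer height, and print speed, and that volume translates into how fast the extruder must feed the 1.75 mm filament:

```python
import math

def volumetric_flow_mm3_s(line_width_mm, layer_height_mm, speed_mm_s):
    """Approximate plastic volume extruded per second for one printed line."""
    return line_width_mm * layer_height_mm * speed_mm_s

def filament_feed_rate_mm_s(flow_mm3_s, filament_diameter_mm=1.75):
    """Filament feed speed needed to supply that volumetric flow."""
    cross_section = math.pi * (filament_diameter_mm / 2) ** 2
    return flow_mm3_s / cross_section

# Example: a 0.42 mm wide, 0.2 mm tall line at 500 mm/s (about 19.7 in/s)
flow = volumetric_flow_mm3_s(0.42, 0.2, 500)
feed = filament_feed_rate_mm_s(flow)
print(f"{flow:.1f} cubic mm/s of plastic, {feed:.1f} mm/s of filament")
```

The line width and layer height values are typical defaults, used here only for illustration; actual settings vary by printer profile and model.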

I used the app to connect to Bambu Lab’s cloud servers, where I chose a digital model from their library. 

To evaluate the printer’s performance, I printed a 3D Benchy tugboat.

This highly detailed tugboat is a standard test for 3D printers; it shows how well a printer can replicate complex features like curves, small details, and inclined planes.

I trimmed the filament tip and threaded it through the tube until it reached the extruder, which feeds and controls the flow of melted plastic to build each layer of a 3D print. 

I then tapped the “Load” icon on the color touchscreen at the front of the 3D printer. 

The extruder smoothly pulled the filament through the PTFE tube and into the hotend of the tool head, where it would be melted for printing my model.

I then saw part of the yellow filament emerging from the nozzle, which meant the printer was ready.

The 3D printer began extruding the heated, melted plastic filament, following the digital file instructions to build the tugboat layer by layer on the build plate.

The app provides a live video feed of the tugboat’s construction from the camera attached to the 3D printer. 

The 3D printer performed flawlessly, producing a robust yellow tugboat model with smooth lines and distinct features like a smokestack and windows. 

I was also impressed by how quietly the printer ran from start to finish. 

As this is Halloween season, I also printed a robotic-looking skeleton.

My son proposed a fitting analogy for 3D printing: Building a brick wall involves stacking layers of bricks, while a 3D printer builds objects in layers of plastic. 

I like this printer and consider it an incredible tool for exploring the possibilities of 3D printing on a personal scale.

Forty years ago, while working for the Winsted Telephone Co., I clearly remember unrolling copper-paired cable from a heavy wooden spool mounted on a trailer hitched to the company’s yellow 1965 Ford F-100 service/utility truck. 

These days, I am threading thin plastic filament from a lightweight recycled cardboard spool attached to a 3D printer.

Perhaps tackling a 3D-printed model of that old ’65 Ford telephone truck will be my next project.

Thank you for the great birthday present, boys.

Finished tugboat and robotic skeleton, 3D printed and placed on the build plate of the Bambu Lab A1 model printer.
(Photo by Mark Ollig)

Bambu Lab A1 mini 3D printer building the tugboat.
(Photo by Mark Ollig)





Friday, October 18, 2024

‘Air Mail’ within a tube network

© Mark Ollig


From 1889 to 1893, John Wanamaker served as US Postmaster General and strongly advocated pneumatic mail delivery.

In 1892, Congress appropriated $10,000 to Philadelphia to build a two-and-a-half-mile network of eight-inch pneumatic mail tubes beneath the city streets.

In 1893, the Philadelphia Post Office began operating the first high-speed pneumatic mail transport system in the US.

This system used air pressure to propel a cylindrical capsule or container (sometimes referred to as a carrier) through a network of tubes between post office substations and the main post office.

Capsules were made from gutta-percha (similar to rubber but harder and less elastic), leather, wood, durable fibers, and steel, typically with a mostly brass shell casing.

The Philadelphia Times wrote the new pneumatic tubes were a “conspicuous success” Feb. 19, 1893.

“Postmaster General Wanamaker and Philadelphia Postmaster Field inaugurated the pneumatic tube, and after dedicating it to piety and patriotism by the Bible and the flag [included inside a container], sent mail matter through it with such speed as to obliterate time,” the article said.

The sound made by a capsule rushing through the tubes was described as “whoosh!”

The capsules varied in diameter depending on the size of the tubes, typically six to seven inches for the eight-inch tubes and around five inches for the six-inch tubes.

To return capsules, the system removed air from the tubes, creating lower pressure that pulled them back, allowing two-way travel within the same tubes.

The pneumatic tube system used electric motors, rotary blowers, and air compressors to create air pressure (3 to 8 psi) that pushed capsules through the tubes.

Although the capsules could reach up to 100 miles per hour, the turns in the tube network limited their average speed to 30 to 35 mph. Reaching their destination, the mail containers emptied onto a cloth-aproned catch.

Most of the pneumatic tube network ran underground and within buildings.

The tubes were usually joined by flanging: the tube ends were widened, then secured to the next tube with bolts and a gasket for an airtight connection.

Lead-based soldering was used for junction points where tubes branched off or changed direction.

Due to high costs and fabrication challenges, steel wasn’t commonly used for pneumatic tube construction until the early 1900s.

“Mail Matter Cut in Pneumatic Tubes” was the Philadelphia Inquirer newspaper headline March 5, 1893, describing “an accident in the service that destroyed many letters.”

The article mentioned a “serious hitch” that temporarily disrupted mail delivery between postal substations and the post office’s pneumatic tube system.

The lid of a mail carrier capsule wasn’t properly fastened. As it traveled through the tube system, it detached, spilling mail parcels that were then shredded by another speeding capsule, “cutting them to pieces,” as the newspaper put it.

The Pneumatic Transit Company operated the Philadelphia mail tube system.

The Philadelphia Inquirer published “The Pneumatic Tubes Facilitate the Handling of Post Office Business” on May 2, 1893. The article stated that the pneumatic tube between the main post office and the East Chestnut Street substation was operational, and that the new system would deliver mail much faster. It also noted previous problems, likely referring to the “serious hitch” described in the March 5 article.

New York City started using a network of pneumatic mail tubes Oct. 7, 1897. The tubes, mostly made of cast iron with an inside diameter of eight and one-eighth inches, were buried four to six feet below the ground.

A capsule carrier pierced with holes and filled with oil would occasionally be sent through the tubes to keep them lubricated.

Due to the fast speed of the mail carriers traveling through the pneumatic tubes, the New York City postal workers operating them were nicknamed “rocketeers.”

In time, New York City was using 27 miles of tubes connecting 23 post offices.

Pneumatic mail tube systems began in Boston (1897), Brooklyn (1898), Chicago (1898), and St. Louis (1904).

The 1909 US Government Printing Office report “Investigations as to Pneumatic-Tube Service for the Mails” notes, “The contract speed of 30 miles an hour between stations is in strong contrast with the contract rate for mail-wagon service, which would range from three to five miles per hour.”
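To put the report’s contrast in perspective, here is a quick back-of-the-envelope calculation (my own, not from the 1909 report) of delivery times over a route the length of Philadelphia’s original two-and-a-half-mile line:

```python
def delivery_minutes(distance_miles, speed_mph):
    """Travel time in minutes at a constant speed."""
    return distance_miles / speed_mph * 60

# Contract tube speed was 30 mph; mail wagons ranged from 3 to 5 mph.
tube = delivery_minutes(2.5, 30)
wagon = delivery_minutes(2.5, 4)  # mid-range wagon speed
print(f"Pneumatic tube: {tube:.0f} minutes; mail wagon: {wagon:.1f} minutes")
```

At those speeds, the tube covers the route in about five minutes versus more than half an hour by wagon, which helps explain the system’s appeal despite its cost.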

The same report states the US Congress’ post-office appropriation bill for the fiscal year ended June 30, 1909, provided “for the transmission of mail by pneumatic tubes or other similar devices, $1,000,000.”

By 1915, six US cities used pneumatic tubes: New York, Brooklyn, Boston, Philadelphia, Chicago, and St. Louis, according to the United States Postal Service.

In the years that followed, more economical mail transport methods led to the decline in use of the mail tube system.

In late 1953, the US Post Office Department ended its use of tube systems for mail delivery, citing tube capacity limits due to expanded mail volume, high costs, and maintenance.

I found no record of Minnesota ever having used a pneumatic tube system for delivering the US mail.

Today, pneumatic tubes are used in hospitals, manufacturing and industrial facilities, and bank drive-throughs.

The pharmacy where I pick up my prescriptions has a drive-through pneumatic tube system.



Friday, October 11, 2024

RCA’s ‘All-Shook Up’ journey

© Mark Ollig

The Wireless Telegraph and Signal Company, the world’s first wireless communications enterprise, was established July 20, 1897.

It was founded to market the inventions of Italian inventor Guglielmo Marconi, who pioneered wireless telegraphy.

Headquartered in England, it was renamed Marconi Wireless Telegraph Company in March 1900.

The company’s American subsidiary, the Marconi Wireless Telegraph Company of America (later American Marconi Wireless), was established in 1899.

The US had also been pioneering wireless technology.

In 1900, Nikola Tesla was granted US Patents 645,576 and 649,621 for a wireless power transmission system that included technologies enabling wireless communication.

Tesla’s innovations laid the groundwork for many of the wireless technologies we use today.

From 1899 to 1900, the US Navy conducted experimental wireless telegraphy trials.

American inventor Lee de Forest developed the three-electrode Audion vacuum tube in 1906, which significantly improved radio signal amplification and detection.

The General Electric Company (GE) founded the Radio Corporation of America (RCA) Oct. 17, 1919.

RCA would assume the radio rights of GE and was initially established with involvement from several companies, including Westinghouse Electric Corp., to take over the assets of American Marconi Wireless.

GE acquired the Marconi Wireless Telegraph Company of America for $3.5 million Nov. 20, 1919, along with the US rights to Marconi’s wireless technology.

Reportedly, the US Navy pressured Marconi to sell its American subsidiary to ensure that the transatlantic radio technology would be under US control, ultimately leading to GE’s acquisition of Marconi Wireless Telegraph Company of America.

RCA gained control of radio-related assets and patents from various companies, including American Marconi Wireless, General Electric, Westinghouse, AT&T, and the Wireless Telegraph and Telephone Company.

By 1926, vacuum tube technology had rapidly advanced, along with the growing AM radio presence in the US.

That same year, RCA established the National Broadcasting Company, pioneering the formation of national radio networks.

In 1929, RCA acquired the Victor Talking Machine Company, known for its “Victrola” phonograph record players and the iconic “His Master’s Voice” logo, with the dog Nipper listening to the speaker attached to a gramophone.

It was renamed the RCA Victor Division of the Radio Corporation of America. RCA Victor was a leading record label that signed iconic artists such as Elvis Presley.

In 1930, the US government filed a federal antitrust lawsuit against General Electric for monopolizing the radio industry. Under the 1932 settlement, General Electric had to divest RCA to allow for more competition, which enabled RCA to grow independently.

In 1936, RCA conducted experimental television broadcasts in the New York area, using a limited number of television sets primarily for its employees.

One of the main attractions at the 1939 New York World’s Fair was RCA’s “The Magic Brain,” a large display resembling a radio with lights illuminated in sequence. The display showed how a TV signal traveled from a camera to a transmitter and a TV screen, and a narrator explained the process.

In 1940, RCA supplied six CXAM radar systems to the US Navy, marking the first radar deployment on US naval vessels.

CXAM: C represents the Navy classification, X refers to the X-band frequency range, A indicates air-search, and M stands for microwave.

Four years earlier, RCA manufactured the VT-138 vacuum tube, a round electron-ray indicator tube commonly used in radios as a tuning aid with a glowing green indicator.

During WWII, miniaturized versions of these electron-ray indicator tubes were adapted for use in military proximity fuses attached to ordnance, such as bombs.

NBC’s New York station, WNBT (now WNBC), began airing regular commercial television broadcasts July 1, 1941.

Manufacturing and public sales of RCA’s CT-100, the first commercially available color TV, began March 25, 1954.

In November 1955, RCA Victor purchased Elvis Presley’s contract from Sun Records for $35,000 (about $411,000 today) and began selling what turned out to be many millions of vinyl records.

In 1968, the RCA Victor Division was renamed RCA Records and continued to release Elvis’s music on records, eight-track tapes, cassette tapes, and compact discs (CDs).

In 1986, General Electric acquired RCA Corporation for approximately $6.28 billion, gaining control of NBC’s television network holdings (then known as NBC, now NBCUniversal), along with other RCA assets.

In 1987, GE focused on core areas like broadcasting (NBC) and financial services (GE Capital), selling some RCA assets, including its consumer electronics manufacturing operations, to Thomson-Brandt, S.A., a French multimedia and electronics manufacturer.

GE retained ownership of NBC until 2011, when Comcast acquired a 51% majority stake in NBCUniversal, with GE holding a 49% stake. Two years later, Comcast obtained GE’s remaining 49% portion.

RCA was founded 105 years ago, and though the company itself may be gone, its trademark name and logo, now owned by Talisman Brands in Houston, TX, live on through licensing agreements for various consumer electronic products.

RCA Records remains a label of Sony Music Entertainment, and its history is one that seems to echo Elvis’ recording of “All Shook Up.”

My music collection contains the 1972 RCA Victor label (with Nipper) stereo LP record album of “Elvis as recorded at Madison Square Garden,” and an Elvis Presley 1973 RCA eight-track tape cartridge.