
Friday, November 29, 2024

The ‘Early Bird’ still soars

© Mark Ollig


In October 1945, science fiction writer Arthur C. Clarke foretold the use of space satellites in an article titled “Extra-Terrestrial Relays – Can Rocket Stations Give Worldwide Radio Coverage?”

He wrote it for Wireless World, an engineering and technology magazine published in the United Kingdom.

Clarke proposed a system of “extra-terrestrial relays” and used the term “space stations” to describe the concept of artificial satellites orbiting the Earth for communication purposes.

He proposed positioning three satellites in orbits roughly 26,000 miles from Earth's center (about 22,300 miles above its surface) to provide continuous global radio coverage.

Each satellite would act like a communication relay. It would pick up signals from anywhere within its coverage area on Earth and then broadcast those signals to other locations within its hemisphere.

Clarke also addressed the need for powerful rockets to place these satellites into orbit, stating, “The development of rockets sufficiently powerful to reach orbital, and even [earth gravitational] escape velocity is now only a matter of years.”

President Kennedy signed the Communications Satellite Act Aug. 31, 1962, leading to the creation of the Communications Satellite Corporation (COMSAT), which the US Congress authorized to establish a global commercial satellite communication system.

In 1964, COMSAT played a pivotal role in the establishment of the International Telecommunications Satellite Organization (INTELSAT) to oversee communications satellites for providing telephone, television, and data transmission services on a global scale.

NASA launched the INTELSAT 1 (F-1) satellite atop a three-stage Delta rocket from Launch Complex 17A at Cape Kennedy, FL, April 6, 1965.

It was nicknamed “Early Bird” from the saying, “The early bird catches the worm.”

It was the first satellite launched and operated by an intergovernmental consortium called INTELSAT, founded in 1964.

Hughes Aircraft Company built INTELSAT 1 for COMSAT; it was the first commercial communications satellite placed in geosynchronous orbit.

At an altitude of 22,300 miles, the satellite, spinning at 152 revolutions per minute, was positioned over the equator at 28° west longitude in a synchronous equatorial orbit over the Atlantic Ocean.

Early Bird's orbital period matched Earth's rotation, so from the ground the satellite appeared to hover over the same spot on the planet.
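
For readers curious why that particular altitude works, here is a rough back-of-envelope sketch in Python; the gravitational constant and Earth radius below are standard textbook values, not figures taken from the Early Bird mission itself.

    # Rough check of the geosynchronous altitude: a satellite whose orbital period
    # equals Earth's rotational period (about 23 hours, 56 minutes) must sit at one
    # particular distance from Earth's center, per Kepler's third law.
    import math

    MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (textbook value)
    SIDEREAL_DAY = 86164.1      # Earth's rotational period in seconds
    EARTH_RADIUS_MI = 3959      # mean Earth radius in miles

    # Orbital radius r where the period T equals one sidereal day: T = 2*pi*sqrt(r^3 / mu)
    r_meters = (MU_EARTH * (SIDEREAL_DAY / (2 * math.pi)) ** 2) ** (1 / 3)
    r_miles = r_meters / 1609.344

    altitude_mi = r_miles - EARTH_RADIUS_MI
    print(f"Geosynchronous altitude: about {altitude_mi:,.0f} miles")
    # Prints about 22,240 miles, consistent with the roughly 22,300-mile figure above.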

Ground stations adjusted their antennas for a direct line of sight to the satellite, ensuring uninterrupted data transmission between North America and Europe.

This 85-pound, cylindrical satellite (28 inches in diameter, 23 inches tall) used solar cells to power its electronics, which included two six-watt transponders operating on a 50 MHz bandwidth.

The Early Bird could handle 240 simultaneous transatlantic telephone calls, along with telegraph and facsimile transmissions, or a single television broadcast.

Early Bird transmitted television coverage of the Gemini 6 spacecraft splashdown Dec. 16, 1965, with astronauts Thomas Stafford and Walter Schirra onboard.

I first became curious about the Early Bird satellite while watching a YouTube video of heavyweight boxing champion Muhammad Ali fighting Cleveland Williams Nov. 14, 1966.

“I’d like at this time to compliment the thousands of people in the United Kingdom, who, where it is nearly four-o’clock, are jamming the theaters over there to see our telecast via the Early Bird satellite,” announced boxing commentator Don Dunphy.

The Early Bird satellite used one channel to broadcast television programs between the two continents, ushering in a new era of live international television.

The phrase ‘live via satellite’ emerged during this era of live transatlantic television broadcasts.

The success of Early Bird proved the practicality of using synchronous orbiting space satellites for commercial communications.

Early Bird ceased operation in January 1969; however, it was reactivated in July of that year when a communications satellite assigned to the Apollo 11 moon mission failed.

In August 1969, the INTELSAT 1-F1 satellite, the Early Bird, was deactivated.

In 1990, INTELSAT briefly reactivated Early Bird to commemorate the satellite’s 25th anniversary.

According to NASA, as of today, “Early Bird is currently inactive.”

An INTELSAT video of Early Bird’s April 6, 1965, launch from Cape Canaveral, FL, can be seen at https://tinyurl.com/y4uvqn2s.

In the video, look for two famous Minnesotans: Hubert Humphrey, vice president of the United States and then chairman of the National Aeronautics and Space Council, and Sen. Walter Mondale, who witnessed the Early Bird's launch via closed-circuit TV.

LIFE magazine had an article about the satellite May 7, 1965, cleverly titled, “The Early Bird Gets the Word.”

In 1965, the word was the Early Bird, which reminds me of the Minneapolis garage band The Trashmen's 1963 hit “Surfin’ Bird” and its lyrics, “A-well-a don’t you know about the bird? Well, everybody knows that the bird is a word.”

Arthur C. Clarke’s 1945 article can be read at https://bit.ly/Clarke1945.

The full 68-page October 1945 Wireless World magazine can be read at: https://bit.ly/3YYc48r.

The 59-year-old Early Bird satellite still soars approximately 22,300 miles above us in a geosynchronous orbit. Its International Designator Code is 1965-028A.

The satellite tracking website n2yo.com shows the Early Bird’s location in real-time; see it at https://bit.ly/3gLNTSB.

In 2025, INTELSAT will be donating a full-sized replica of the INTELSAT 1 satellite, aka the Early Bird, to the Smithsonian’s National Air and Space Museum in Washington, DC.



Friday, November 22, 2024

AI engines: GPUs and beyond

© Mark Ollig

Founded on April 5, 1993, NVIDIA Corp. is headquartered in Santa Clara, CA.

“NVIDIA” originates from the Latin word “invidia,” meaning “envy.” The company’s founders reportedly chose this name with the aim of creating products that would be the envy of the tech industry.

Initially, NVIDIA focused on developing graphics processing units (GPUs) for the computer gaming market.

High-performance graphics cards rely on specialized electronic circuits, primarily the GPU, to perform enormous numbers of calculations.

These circuits act as the “computing muscle,” allowing the card to handle complex graphics rendering, AI processing, and other workloads.

GPUs execute complex calculations and accelerate applications with heavy graphics and video processing demands.

Graphics cards are used for computer gaming, self-driving cars, medical imaging analysis, and artificial intelligence (AI) natural language processing.

The growing demand for large language models and other AI applications has expanded NVIDIA’s graphics card sales.

According to its datasheet, the NVIDIA H200 Tensor Core GPU is designed for generative AI and high-performance computing (HPC) and is equipped with specialized processing units that accelerate AI computations and matrix operations.

The NVIDIA H200 Tensor Core GPU features 141 GB of High-Bandwidth Memory 3 Enhanced (HBM3e), an ultra-fast memory technology that speeds the movement of data for large AI language models and scientific computing workloads.

The H200 is the first graphics card to offer HBM3e memory, providing about 1.4 times the memory bandwidth of the H100 and nearly double the memory capacity.

It also offers a memory bandwidth of 4.8 terabytes per second (TB/s), the rate at which data moves between the GPU’s memory and its processing cores.

This memory bandwidth significantly increases computing capacity and performance, allowing researchers running scientific computing workloads to work more efficiently.
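
To put those numbers in perspective, here is a small, purely illustrative Python sketch comparing the H200 figures above with the H100 specifications cited later in this series (80 GB of HBM3 and 3.35 TB/s):

    # Back-of-envelope comparison of H200 and H100 memory specifications,
    # using only the figures quoted in these columns.
    h200_gb, h200_tbs = 141, 4.8
    h100_gb, h100_tbs = 80, 3.35

    print(f"Bandwidth ratio: {h200_tbs / h100_tbs:.2f}x")  # about 1.43x, the "1.4 times" above
    print(f"Capacity ratio:  {h200_gb / h100_gb:.2f}x")    # about 1.76x, "nearly double"

    # Time to sweep the entire 141 GB of memory once at full bandwidth:
    sweep_seconds = (h200_gb / 1000) / h200_tbs
    print(f"Full-memory sweep: about {sweep_seconds * 1000:.0f} ms")  # roughly 29 ms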

It is based on NVIDIA’s Hopper architecture, which enhances GPU performance and efficiency for AI and high-performance computing workloads; the architecture is named after computer scientist Grace Hopper.

AI uses “inference” to apply what a trained model has learned to new information and make decisions. To do this quickly, AI can use multiple computers in the “cloud” (computers connected over the internet).
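
Here is a toy Python sketch of what inference means in practice: new data is pushed through a model whose weights were already learned during training. The tiny weights below are made up for illustration only; a real model like Llama 3 has billions of them.

    # Toy illustration of "inference": running new data through an already-trained model.
    import numpy as np

    weights = np.array([[0.8, -0.3], [0.1, 0.9]])  # pretend these were learned during training
    bias = np.array([0.05, -0.2])

    def infer(new_input):
        # One forward pass: multiply, add bias, apply an activation; no further learning occurs.
        return np.maximum(weights @ new_input + bias, 0.0)

    print(infer(np.array([1.0, 2.0])))  # the model's "decision" about data it has never seen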

The H200 roughly doubles inference speed compared with H100 graphics cards for large language models like Meta AI’s Llama 3.

The higher memory bandwidth ensures faster data transfer, reducing bottlenecks in complex processing tasks.

It is designed to meet the increasingly large and complex data processing needs of modern AI technology.

As these tasks become more complex, GPUs need to become more powerful and efficient. Researchers are exploring several technologies to achieve this.

Next-generation memory systems, such as 3D-stacked memory, which layers memory cells vertically, will further boost data transfer speeds.

High-Performance Computing (HPC) leverages powerful computers to solve complex challenges in fields like scientific research, weather forecasting, and cryptography.

Generative AI is a technology for creating writing, pictures, or music; it also powers large language models, which can understand and generate text that reads as though a human wrote it.

Powerful GPUs generate significant heat and require advanced cooling.

AI optimizes GPU performance by adjusting settings, improving efficiency, and extending their lifespan. Many programs use AI to fine-tune graphics cards for optimal performance and energy savings.

Quantum processing uses the principles of quantum mechanics to solve complex problems that are too difficult to address using traditional computing methods.

Neuromorphic computing, exemplified by spiking neural networks, seeks to mimic the efficiency and learning architecture of the human brain.

As GPUs push the limits of classical computing, quantum computing is emerging with QPUs (quantum processing units) at its core.

QPUs use quantum mechanics to solve problems beyond the reach of even the most powerful GPUs, with the potential for breakthroughs in AI and scientific research.

Google Quantum AI Lab has developed two quantum processing units: Bristlecone, with 72 qubits, and Sycamore, with 53 qubits.

While using different technologies, QPUs and GPUs may someday collaborate in future hybrid computing systems, leveraging their strengths to drive a paradigm shift in computing.

Google’s Tensor Processing Units (TPUs) specialize in deep learning tasks, such as matrix multiplications.
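
As a minimal illustration (the array sizes here are arbitrary), the core operation these chips accelerate looks like this in plain Python with NumPy; the specialized hardware simply performs this one step at enormous scale and speed.

    # The workload TPUs and GPUs accelerate: multiplying large matrices, billions of times over.
    import numpy as np

    activations = np.random.rand(512, 1024)  # e.g., a batch of intermediate neural-network values
    weights = np.random.rand(1024, 2048)     # one layer's learned parameters

    output = activations @ weights           # the matrix multiplication itself
    print(output.shape)                      # (512, 2048)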

Other key processors fueling AI computing include Neural Processing Units (NPUs), which accelerate neural network training and execution, and Field-Programmable Gate Arrays (FPGAs), which excel at parallel processing and are essential for customizing AI workloads.

Additional components utilized for AI processing are Application-Specific Integrated Circuits (ASICs) for tailored applications, System-on-Chip designs integrating multiple components, graphics cards leveraging GPU architecture, and Digital Signal Processors (DSPs).

These components are advancing machine learning, deep learning (a technique using layered algorithms to learn from massive amounts of data), natural language processing, and computer vision.

GPUs, TPUs, NPUs, and FPGAs are the “engines” fueling the computational and processing power of supercomputing and artificial intelligence.

They will likely be integrated with and work alongside quantum processors in future hybrid AI systems.

I still find the advancements in software and hardware technology that have unfolded throughout my lifetime incredible.

I used Meta AI's large language model program (powered by Llama 3) and its text-to-image generator (AI Imagined) to create the attached image of two people standing in front of a futuristic "AI Quantum Supercomputer." The image was generated from my text input alone, with no human modifications.






Friday, November 15, 2024

Accelerating the future: supercomputing, AI, part one

© Mark Ollig


Intel Corp. (INTC), founded in 1968, had been the major semiconductor-sector representative on the Dow Jones Industrial Average (DJIA) since 1999; that changed this month.

The DJIA, established May 26, 1896, with 12 companies, is today an index tracking the stock performance of 30 major companies traded on the US stock exchange.

In 1969, Intel partnered with Busicom, a Japanese calculator manufacturer founded in 1948, to develop a custom integrated circuit (IC) for its calculators.

This contract led to the creation of the Intel 4004, the first commercial microprocessor, used in 1971 in Busicom’s 141-PF calculator; the 4004 placed an entire central processing unit (CPU) on a single chip.

By integrating transistors, resistors, and capacitors, ICs like the Intel 4004 revolutionized the electronics industry, enabling the design of smaller, more powerful devices. Intel designs and manufactures ICs, including microprocessors, which form the core of a CPU.

Today’s Intel CPUs are supplied to computer manufacturers, who integrate them into laptops, desktops, and data centers for cloud computing and storage.

The DJIA recently replaced Intel with NVIDIA (NVDA), a technology company founded in 1993.

“NVIDIA” originates from the Latin word “invidia,” meaning “envy.” The company’s founders reportedly chose this name with the aim of creating products that would be the envy of the tech industry (interestingly, their green logo hints at this, a nod to leaving competitors ‘green with envy’).

NVIDIA has outpaced Intel in the area of artificial intelligence (AI) hardware; however, Intel has strengthened its AI capabilities with its Core Ultra processor series and the Gaudi 3 accelerator.

Gaudi 3 is a data center-focused AI accelerator designed for specialized workloads, such as processing AI data in large-scale computing centers and training large language models.

The DJIA has clearly recognized this shift, highlighting NVIDIA’s growing influence in AI via its graphics processing units (GPU) technology.

The GPU functions as the computing muscle executing tasks, specifically complex calculations and graphic processing necessary for graphics-heavy applications.

Best known for designing and manufacturing high-performance GPUs, NVIDIA released its first graphics chip, the NV1, in 1995, followed by the RIVA 128 in 1997 and the GeForce 256 in 1999.

GPUs are processors engineered for parallel computation, excelling in tasks that require simultaneous processing, such as rendering complex graphics in video games and editing high-resolution videos.

Their architecture also makes them especially suited for AI and machine learning applications, where their ability to rapidly process large datasets substantially reduces the time required for AI model training.
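
A small CPU-only Python sketch illustrates the idea: the same arithmetic applied to millions of values, one element at a time versus all at once. No GPU is involved here; the example only shows the data-parallel style of work a GPU performs in hardware.

    # Comparing element-at-a-time processing with a vectorized (parallel-friendly) operation.
    import time
    import numpy as np

    data = np.random.rand(5_000_000)

    start = time.perf_counter()
    looped = [x * 2.5 + 1.0 for x in data]   # one value at a time
    loop_time = time.perf_counter() - start

    start = time.perf_counter()
    vectorized = data * 2.5 + 1.0            # the whole array in one parallelizable operation
    vector_time = time.perf_counter() - start

    print(f"Loop: {loop_time:.3f}s  Vectorized: {vector_time:.3f}s")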

Originally focused on graphics rendering, GPUs have evolved to become essential for a wide range of applications, including creative production, edge computing, data analysis, and scientific computing.

NVIDIA’s RTX 4070 Super GPU, priced between $500 and $600, targets mainstream gamers and content creators seeking 4K resolution.

For more complex workloads, the RTX 4070 Ti Super GPU, priced at around $800, offers higher performance for 3D modeling, simulation, and analysis work by engineers and other professionals.

Financial experts use GPU technology for analyzing data used in monetary predictions and risk assessments to make better decisions.

Elon Musk launched his new artificial intelligence company, “xAI,” March 9, 2023, with the stated mission to “understand the universe through advanced AI technology.”

The company’s website announced the release of Grok-2, the latest version of its AI language model, featuring “state-of-the-art reasoning capabilities.”

Grok is reportedly named after a Martian word in Robert A. Heinlein’s science fiction novel “Stranger in a Strange Land,” implying a deep, intuitive understanding – at least he did not name it HAL 9000 (“2001: A Space Odyssey”) or the M-5 multitronic computing system (“Star Trek”).

xAI’s “Colossus” AI training supercomputer employs NVIDIA’s Spectrum-X Ethernet networking platform, providing 32 petabits per second (Pbps) of bandwidth to handle the data flows necessary for training large AI language models.

It also uses 100,000 NVIDIA H100 GPUs (the H honors Grace Hopper, a pioneering computer scientist), with plans to expand to 200,000 GPUs, including the new NVIDIA H200 models.
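
Dividing the column’s own figures gives a rough sense of scale; this is purely a back-of-envelope split that assumes the bandwidth is shared evenly, which real traffic never is.

    # Rough per-GPU share of the 32 Pbps Spectrum-X fabric across 100,000 GPUs.
    total_bandwidth_bps = 32e15   # 32 petabits per second
    gpu_count = 100_000

    per_gpu_gbps = total_bandwidth_bps / gpu_count / 1e9
    print(f"About {per_gpu_gbps:,.0f} Gbps of network bandwidth per GPU")  # roughly 320 Gbps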

The NVIDIA H100 GPU has 80 GB of HBM3 high-bandwidth memory and 3.35 terabytes per second (TB/s) of bandwidth.

The NVIDIA H200 GPU, released in the second quarter of 2024, has 141 GB of HBM3e (extended) memory and 4.8 TB/s bandwidth.

It outperforms the H100 in some AI tasks by up to 45%, delivers twice the inference speed (a measure of how quickly an AI processes new information and generates results) on large language models, and uses less energy than the H100.

Both the H100 and H200 GPUs are based on the Hopper architecture.

Be sure to read next week’s always-exciting Bits and Bytes for part two.


NVIDIA HGX H200 141GB 700W 8-GPU Board













Elon Musk’s ‘Colossus’ AI training system
 with 100,000 Nvidia chips
(GPU module in the foreground)


Friday, November 8, 2024

The web’s early threads

© Mark Ollig

The internet, our foundational digital network, is similar to the paved highways connecting various destinations.

The web is an application that runs on top of the internet, much like the cars that travel over our highways.

The web operates on top of the internet’s underlying infrastructure using a separate layer of software and protocols (the overlay structure and operability) to enable website interactions.
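
A minimal Python sketch shows that layering: the “web” part is only the HTTP text we compose, while the TCP/IP connection underneath is the internet doing the driving (example.com is used here simply as a stand-in host).

    # The web as an application riding on the internet: HTTP text sent over a TCP/IP connection.
    import socket

    with socket.create_connection(("example.com", 80)) as conn:    # internet layer: TCP/IP
        request = "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
        conn.sendall(request.encode("ascii"))                      # web layer: the HTTP protocol
        response = b""
        while chunk := conn.recv(4096):
            response += chunk

    print(response.decode("utf-8", errors="replace")[:200])        # the first lines of the reply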

British computer scientist Tim Berners-Lee presented “Information Management: A Proposal,” a recommendation for a distributed hypertext system, to his colleagues at the European Organization for Nuclear Research, or CERN, in Geneva, Switzerland, March 12, 1989.

Tim Berners-Lee did not conceive the idea of hypertext; earlier ideas and developments influenced his work.

In 1945, Vannevar Bush, an American engineer who led the US Office of Scientific Research and Development during World War II, said, “Consider a future device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.”

“As We May Think,” an essay Bush wrote and published in July 1945 in The Atlantic Monthly magazine, describes a personalized information system, an electromechanical device called “Memex” that stores and links information; at that time, the technology was not yet available to build it.

Although never constructed, Memex, a theoretical concept, influenced the development of hypertext and personal computing.

In 1960, American information technology pioneer Ted Nelson began Project Xanadu to develop a user-friendly way to cross-reference documents electronically with computers; however, due to technical and coordination issues, it was never completed.

In 1965, Nelson coined the terms “hypertext” and “hypermedia” to describe methods of instantly interconnecting documents via active links. Hypertext links words, images, and concepts to related information. For example, while reading a website about coffee, you can click a link to see a map of where the beans come from or learn how espresso is made.

During the 1960s, Douglas Engelbart developed the oN-Line System (NLS), which used hypertext-like linking and introduced graphical computing elements that influenced the development of graphical user interfaces at Xerox and Apple.

“When I first began tinkering with a software program that eventually gave rise to the idea of the World Wide Web, I named it Enquire, short for ‘Enquire Within Upon Everything,’ a musty old book of Victorian advice I noticed as a child in my parents’ house outside London,” Tim Berners-Lee wrote on the first of 226 pages in his 1999 book “Weaving the Web,” a copy of which I own.

Berners-Lee said the title, “Enquire Within Upon Everything,” was suggestive of magic, and that the book was a portal to a world of information.

“In 1980, I wrote a program, Enquire, for tracking software in the PS [Proton Synchrotron] control system. It stored snippets of information and linked related pieces. To find information, one progressed via links from one sheet to another,” Berners-Lee said.

During his March 12, 1989, presentation, Berners-Lee diagrammed a flowchart showing CERN network users distributing, accessing, and collaborating on electronic files. Electronic documents would be viewed and modified regardless of the computer model or operating system.

He proposed a generic, user-friendly client interface: “browser” software that would let a computer user interact with hypertext data servers.

Berners-Lee compared hypertext to a phone directory, with links connecting information about people, places, and categories using a system where users could access a single database via a distributed file system.

He developed the protocols for linking databases and storing documents across a network.

Tim Berners-Lee and his colleague Robert Cailliau, a Belgian informatics engineer who in 1987 proposed a hypertext system for CERN, collaborated to introduce the phrase “WorldWideWeb” Nov. 12, 1990.

A basic form of the WorldWideWeb software was operating at CERN by Dec. 25, 1990.

Berners-Lee wrote the initial code for Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), and the first web browser, which he called “WorldWideWeb,” as a tool for sharing information at CERN. He used a NeXTcube computer workstation.

Between 1989 and 1991, he designed the World Wide Web, creating the HTTP, URL, and HTML technologies that allow linked documents to be shared over the internet.

Public computer users outside CERN could use Berners-Lee’s WorldWideWeb hypertext software over the internet starting Aug. 6, 1991.

In 1993, the internet primarily connected universities, private companies, and government computers.

In late 1992, the Mosaic web browser was being developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign. Marc Andreessen and Eric Bina are often credited as the lead developers and driving forces behind it.

The NCSA Mosaic, version 1.0, could be freely downloaded from the NCSA website April 22, 1993.

I remember using NCSA Mosaic on my HP OmniBook 300 laptop computer. Mosaic featured icons, bookmarks, pictures, and a friendly user interface integrating multimedia with text and graphics.

The web, an overlaying program on the internet, operates in harmony with the internet protocols created by Vint Cerf and Robert Kahn.

Tim Berners-Lee’s first hypertext website can be seen at https://bit.ly/48AtVXt.

Today, we are witnessing the World Wide Web evolving. Its threads between information, social media, business, government, and e-commerce sites are rapidly merging with artificial intelligence, so hang on, folks!

A snapshot of the 1993 NCSA Mosaic web browser in use.
(Photo Public Domain, by Charles Severance)


Friday, November 1, 2024

Remember to cast your ‘ballotta’

© Mark Ollig


The word “election” originated in the 13th century from the Anglo-French language, meaning “choice” or “selection.”

Ballotta is the Italian word for the “little ball” used in voting, from which the English word “ballot” is derived.

In the late 1500s, the people of Venice, Italy, would register their vote (in secret) by dropping a specifically marked or colored ball (ballot) into a container. The different colors or markings on the balls represented a particular vote or candidate. The balls were then counted to determine the winning choice.

Around 508 BC, Athens created a system to protect its democracy from tyrants and prevent anyone from gaining excessive power.

Citizens could write the name of someone they believed threatened the stability and well-being of the entire community on a piece of pottery called an ostrakon. If enough votes were cast, the person faced exile. This practice, known as ostracism, is where the word “ostracize” originates.

In 1856, William Boothby, then a sheriff of Adelaide in South Australia, developed the Australian ballot method.

This method used secret paper ballots on government-issued forms printed with candidates’ names. Voters cast their ballots in an enclosed booth and placed them in a secure ballot box for hand-counting.

Boothby’s Australian secret ballot method spread to Europe and the United States, where voters in Massachusetts first used it during a US presidential election Nov. 6, 1888.

Early voting practices in our country often involved publicly declaring one’s chosen candidate aloud or writing their name on paper, often in front of others. This practice is known as non-secret voting.

Citizens using this “non-secret” ballot method were sometimes intimidated, coerced, or bribed monetarily to cast their vote for a particular candidate.

In 1888, Massachusetts and New York adopted the Australian ballot system statewide, which included government-printed ballots listing all the candidates, rather than having voters write in names or use ballots provided by political parties.

Privacy for ballot marking was ensured using compartmental booths or tables with partitions.

The Minnesota Election Law of 1891 required the use of the Australian ballot system for all general elections.

The January 1891 general statutes of the State of Minnesota state, “The election law of 1891, bringing the entire state under the so-called Australian system of voting in general elections, imposes important duties upon this office, also upon each and every town board and town clerk, all of which must be performed in proper order to secure a valid election.

Under section 44 of said law, each election district must be provided with three ballot boxes for voting, one ballot box painted white, one painted blue, and one painted black. There shall also be provided in each election precinct, two voting booths for every hundred electors registered. There shall also be provided an indelible pencil for each voting booth.”

Our use of the term “voting booth” likely originated from the name William Boothby, although this is not definitively proven. Back in the 19th century, a booth was also considered an enclosed space for animals inside a barn.

Minnesota used the Australian system of voting for the 1892 US presidential election.

A Minneapolis Times newspaper article from Nov. 15, 1892, titled, “Comment on the Australian Ballot System of counting” stated, “The Australian ballot law has its limitations, and those who’ve worked closely with it, like election judges, generally agree that while it’s effective in preventing illegal voting and ensuring ballots are cast secretly, it falls short when it comes to counting those ballots.”

Jacob Hiram Myers (1841 to 1920) obtained US Patent 415,549, titled “Voting Machine,” Nov. 19, 1889, for the first mechanical lever voting machine.

In 1890, he founded Myers American Ballot Machine Company, and his voting machines were first used in Lockport, NY, in 1892 for a town election.

Unfortunately, Myers’ voting machines encountered significant problems during the Rochester, NY election of 1896, after which his company closed.

By the 1930s, improved models of mechanical lever voting machines were being used in many US cities; however, they were subject to various problems, including tampering, and by 1982, most US production of these machines had ended.

An optical mark-sense scanning system for reading ballots was first used in 1962 in Kern City, CA.

The Norden Division of United Aircraft and the City of Los Angeles designed and built this ballot reading method, which was also used in Oregon, Ohio, and North Carolina.

In the 1964 presidential election, voter jurisdictions in two states, California and Georgia, used punch cards and computer tabulation machines.

The 2000 presidential election is remembered for Florida’s punch card ballots and their “hanging chads” recount.

US Patent 3,793,505 was granted for “the Video Voter” Feb. 19, 1974. The abstract described it as “An electronic voting machine including a video screen containing the projected names of candidates or propositions being voted.”

The Video Voter was used in Illinois in 1975 and is considered the first direct-recording electronic voting machine used in an election.

Today, Minnesota ballot tabulators use optical scanner equipment to read and record the ballot vote for each candidate. Companies providing the state’s voting equipment include Dominion Voting Systems, Election Systems & Software (ES&S), and Hart InterCivic (Hart Verity).

Be sure to exercise your right to vote.



An image depicting a New York polling place in 1900, showing voting booths on the left. The image is in the public domain and is from the 1912 “History of the United States,” volume V, Charles Scribner's Sons, New York.