
Friday, December 27, 2024

A year of column memories

“Mark, as you get older, time goes by faster,” my mother once told me.

As this is my final column for 2024, I decided to write a recap of the topics we explored.

“A Silent Cinema Journey” – Jan. 4.

Exploring the silent film era in the 1920s.

“Louise Brooks: A Silent Starlit Journey” – Jan. 12.

A biography of silent film star Louise Brooks – my favorite flapper!

“GUI: The Computer Game Changer” – Jan. 19.

A look at the evolution and impact of the graphical user interface (GUI).

“Xerox versus Apple: ‘Star’ Wars” – Jan. 26.

Discusses the 1989 lawsuit where Xerox sued Apple for allegedly copying elements of its GUI.

“Untethered: Embracing the Wireless Life” – Feb. 9.

Tracing the shift from wired to wireless technology.

“‘Zero-G and I Feel Fine’” – Feb. 16.

Recounted John Glenn’s historic orbit of the Earth in 1962.

“From Bell Labs: A ‘Telephone with Eyes’” – Feb. 23.

Bell Labs’ role in early television technology in the 1920s.

“Spider: A Lunar Rehearsal in Earth Orbit” – March 1.

The Apollo 9 mission in 1969, and the rendezvous and docking of the lunar module in Earth’s orbit.

“An AI-Quantum Paradigm Shift is Underway” – March 8.

Examining the convergence of artificial intelligence (AI) and quantum computing.

“Telediagraph: Sending, Receiving Images via Telegraph” – March 15.

The story of Ernest A. Hummel and his invention of the telediagraph.

“Sketchpad’s ‘Whirlwind’ Graphical Interaction” – March 22.

Reviewed the Whirlwind I computer and its impact on computer graphics.

“Protecting the US During the Cold War” – March 28.

Reviewed the US Semi-Automatic Ground Environment (SAGE) air defense system and the Duluth SAGE Direction Center.

“Text-to-Video: OpenAI’s Sora” – April 5.

A look at Sora, a text-to-video AI model developed by OpenAI.

“The Personal Computer Stepping Stone” – April 12.

Shared the story of the Sol-20 computer, a pioneering personal computer developed in 1976.

“The Space Shuttle Enterprise” – April 19.

Recounted the development and testing of the space shuttle Enterprise.

“Sipping Coffee and Reminiscing” – April 26.

Review of the Apollo Guidance Computer and my purchase of a Hewlett-Packard OmniBook 4000CT notebook computer in 1995.

“Blast into the Past: Dr. Sbaitso” – May 3.

I revisited Dr. Sbaitso, a virtual therapist program from 1991 that I used on a personal computer.

“The IBM 1410 Computer” – May 10.

Provided an overview of the IBM 1410 computer, a second-generation computer introduced in 1960.

“Minnesota’s Role as the Computing Heartland” – May 17.

Looked back at the history of computing in Minnesota.

“The Saturday Morning Emergency Broadcast” – May 24.

Recounted the nationwide accidental triggering of the Emergency Broadcast System (EBS) in 1971.

“NORAD: Guardians of North America” – May 31.

Detailed the history of the North American Aerospace Defense Command (NORAD).

“Alexander Bain, the Clockmaker Who Electrified Time” – June 7.

The story of Alexander Bain, a Scottish inventor who developed the electric clock in 1840.

“A Collaborative Telegraph Network” – June 13.

My column on the development of the telegraph in the 19th century.

“Will the US Electric Grid Handle AI’s Growth?” – June 21.

I raised concerns about the increasing energy demand for AI.

“Internet Archive: A Treasure Trove of Digital Nostalgia” – June 28.

My experiences with the Internet Archive’s Wayback Machine.

“Wish I Could Get a Brick or Two” – July 5.

A reflection on the demolition of my childhood elementary school.

“From Sand to Silicon Wafers” – July 12.

Explored the process of creating microchips.

“Mission Control: ‘Stay or No-Stay’” – July 19.

Recounted the tense moments during the Apollo 11 moon landing.

“GPS: Part One” – July 26.

Investigated the development of the Global Positioning System (GPS).

“GPS: Part Two” – Aug. 2.

The conclusion from July 26.

“AI: Trust, But Verify” – Aug. 9.

Explored the benefits and challenges of generative AI.

“The Newsroom’s Unsung Hero” – Aug. 16.

I paid tribute to the teletype machine.

“From Bits to Petabits per Second” – Aug. 23.

The progression of data transmission speeds.

“The Visionaries Shaping Our Digital World” – Aug. 30.

A column about Roger Fidler and Alan Kay, who decades ago envisioned tablet computers.

“Smartphones (and Radios) in the Classroom” – Sept. 6.

Examined the evolving role of technology in the classroom.

“Reaching the Moon: 65 Years Ago” – Sept. 13.

Commemorated the 65th anniversary of the first spacecraft to reach the moon.

“Find it with a Smart Tag” – Sept. 20.

Exploring the world of smart tags.

“QR Codes: Mysterious Square 2D Patterns” – Sept. 27.

Looked at the history and applications of Quick Response (QR) codes.

“The Early Days of Minnesota Television” – Oct. 4.

Reviewed the history of television in Minnesota.

“RCA’s ‘All Shook Up’ Journey” – Oct. 11.

Traced the history of the Radio Corporation of America (RCA).

“‘Air Mail’ within a Tube Network” – Oct. 18.

A look at pneumatic mail delivery systems.

“My ‘Additive Manufacturing’ Journey” – Oct. 25.

A column about my personal experience with 3D printing.

“Remember to Cast Your ‘Ballotta’” – Nov. 1.

Explored the history of assorted voting methods.

“The Web’s Early Threads” – Nov. 8.

Looked back at the origins of the World Wide Web.

“Accelerating the Future: Supercomputing, AI, Part One” – Nov. 15.

Discussed the growing importance of AI in the tech industry.

“AI Engines: GPUs and Beyond” – Nov. 22.

Explored the role of graphics processing units (GPUs) in powering AI.

“The ‘Early Bird’ Still Soars” – Nov. 29.

The story of NASA’s INTELSAT 1 satellite.

“The Mysterious Miniature ‘Space Shuttle’” – Dec. 6.

My take on the mysterious X-37B, an uncrewed, reusable US spaceplane.

“A Bright Idea: An Electrically Lit Christmas Tree” – Dec. 13.

An illuminating history of Christmas tree lights.

“Jessica’s Question: A Christmas Tale” – Dec. 20.

A story on how Santa Claus uses technology.

“A Year of Memories and Columns” – Dec. 27.

Today’s column.

It is nearly time to say goodbye to 2024; Mom was right.

I created this image using Gemini Advanced AI 

Friday, December 20, 2024

Jessica’s question: a Christmas tale

© Mark Ollig

On Dec. 22, 2008, I wrote a special Christmas column about a young person named Jessica who asked a question about Santa Claus.

I remember my mother enjoying that column, and in her memory, I am republishing it today with a few modifications.

The column started with Jessica asking, “Does Santa Claus use a computer?”

All right, Jessica, I emailed my list of North Pole contacts and found one elf from Santa’s North Pole Toy Workshop who investigated your question.

Finarfin Elendil moonlights as a freelance journalist with the North Pole Frosty newspaper during the Christmas offseason.

He informed me that the jolly old elf with a white beard, a broad face, and a little round belly that shook when he laughed like a bowl full of jelly is very computer savvy.

The computer located at the Claus Computer Center (CCC) is cleverly concealed beneath the North Pole’s main toy-making factory. According to Elendil, it oversees the operation of Santa’s key toy production facilities.

The CCC uses state-of-the-art North Pole (NP) quantum computer technology to efficiently analyze the wish lists sent to Santa.

Sophisticated software ensures that gifts for all the good girls and boys are processed quickly, streamlining delivery routes to ensure smooth and timely delivery of toys using Santa’s airborne sled, code-named Sleigh-One.

Sleigh-One is more than a flying wooden toboggan; it features an onboard mini-computer networked in real time with the CCC, providing Santa with up-to-date information.

A 3D holographic display on Sleigh-One shows Global Positioning System data.

One display monitors Sleigh-One’s speed and altitude, measured in MPR (miles per reindeer), the reindeer-powered output of Dasher, Dancer, Prancer, Vixen, Comet, Cupid, Donner, and Blitzen. And, of course, because of his bright, shiny red nose, Rudolph, the “Red-Nosed Reindeer,” is officially designated by Santa as “Reindeer One.”

Sleigh-One also features another display showing the number of presents delivered, along with cup holders that Santa and Mrs. Claus (when she travels with him) use to hold their eggnog.

The telemetry data it receives from the CCC maps and computes the coordinates for every rooftop fireplace worldwide that Santa descends into to deliver presents.

Elendil explained that if a home doesn’t have a chimney, Santa’s computer kicks into gear and activates the “back door” software program, using magical algorithms for Christmas present deliveries.

Of course, there can be glitches.

One time, Elendil reported the software program filled a neighbor’s refrigerator with pickled herring and even filled a chimney with fruitcake! But now, he said, everything works perfectly.

When I was a child, one of my favorite Christmas TV specials was “Rudolph the Red-Nosed Reindeer,” which first aired back in 1964.

One highlight of the special was Christmas Eve when a major blizzard hit the North Pole.

Santa realized he could not navigate his reindeer sleigh through the storm, leading him to consider canceling Christmas.

However, a young reindeer named Rudolph had a very bright red nose that would, as Santa said, “cut through the murkiest storm they can dish up.”

That Christmas Eve, as the doors to the North Pole’s Toy Workshop opened, the heavy blowing snow rushed towards Sleigh-One, which was ready to deliver presents throughout the world.

Santa: “Ready, Rudolph?”

Rudolph: “Ready, Santa!”

Santa: “Well, let’s be on our way. OK, Rudolph. Full power!”

Returning to Jessica’s question, Elendil reported the North Pole’s supercomputer confirmed that there are approximately 131 million households in the U.S.

The world’s population is nearly 8 billion, with homes scattered across 196.9 million square miles of Earth’s surface.

Further calculations, taking into account densely populated areas, reveal an average distance of 0.1 miles between homes.

To deliver all the Christmas presents in a single night, Sleigh-One averages a cruising speed of nearly 1.3 million miles per hour!

Elendil shared the story of when Dasher asked Santa if the sleigh could travel the speed of light, which would be 186,000 miles per second and 671 million miles per hour.

Santa explained that if he traveled that fast, Rudolph’s nose light would trail behind the sleigh like a comet’s tail, and they would enter a time warp, traveling backward in time and delivering presents before Christmas.

Therefore, instead of reaching light speed, they used a specially designed sleigh equipped with magical transwarp-time drive capabilities.

Well, Jessica, I hope you found this story fun to read; I sure enjoyed writing it.

Christmas originates from the Old English phrase “Cristes maesse,” meaning “Christ’s Mass.” This phrase first appeared in historical documents around 1038 AD. It evolved into the Middle English “Christemasse” and the modern “Christmas.”

As we grow older and face the challenges life presents, it’s important to hold onto the magical memories of Christmas — even amid our challenges, there is still room for wonder and joy.

I wish you all a very Merry Christmas.

Finarfin Elendil, who moonlights as a freelance journalist with the North Pole Frosty newspaper.
I created the image using the Meta AI artificial intelligence program (AI Imagined), which generated the image based on my text prompts.

Friday, December 13, 2024

A bright idea: an electrically lit Christmas tree

© Mark Ollig

Up until the late 19th century, Christmas trees were decorated with garlands of popcorn, homemade ornaments, and edible treats like berries, nuts, cookies, and fruits.

Lighted candles were also used on Christmas trees, but due to fire hazards, a bucket of water was often kept nearby.

In 1879, Thomas Edison developed a practical incandescent light bulb at his Menlo Park, NJ, laboratory.

By the Christmas season of 1880, he was demonstrating his electric lighting system, stringing bulbs outside his lab for the nearby railroad passengers to see.

The New York Times reported Dec. 21, 1880, that New York City officials visited Thomas Edison’s Menlo Park laboratory, where they were impressed by his electric lighting. The article noted a walkway illuminated with 290 electric lamps, “which cast a soft and mellow light on all sides.”

In December of 1882, Edward H. Johnson, an associate of Thomas Edison and the vice president of the Edison Electric Illuminating Company, created the first electrically illuminated Christmas tree.

He hand-wired a string of 80 red, white, and blue electric light bulbs on the tree in the parlor room of his home.

Johnson placed the illuminated Christmas tree on a slowly rotating platform (powered by a direct-current electric dynamo generator) next to a window so the tree was visible from the street.

After visiting Johnson’s home on West 12th Street and seeing his electrically lit Christmas tree, William Augustus Croffut, a journalist for the Detroit Post and Tribune, wrote a Dec. 22, 1882, newspaper article saying, “Last evening, I walked over beyond Fifth Avenue and called at the residence of Edward H. Johnson, vice-president of Edison’s electric company. There, at the rear of the beautiful parlors, was a large Christmas tree, presenting a most picturesque and uncanny aspect.”

He went on to describe the electrical lights on the Christmas tree, “It was brilliantly lighted with many colored globes about as large as an English walnut and was turning some six times a minute on a little pine box. There were 80 lights in all encased in these dainty glass eggs, and about equally divided between white, red, and blue.”

In the following years, Johnson continued to refine his Christmas tree display, increasing the number of lights.
An 1884 New York Times article said of his Christmas tree, “It stood about six feet high, in an upper room, and dazzled persons entering the room. There were 120 lights on the tree, with globes of different colors.”

Despite the public’s fascination with Johnson’s colorful bulbs, the transition from traditional candles on Christmas trees was gradual, slowed by high costs and limited access to electricity in smaller cities and rural areas.

In 1895, President Grover Cleveland had electric lights placed on the indoor White House Christmas tree, which helped popularize the practice of decorating Christmas trees with electric lights.

The Dec. 6, 1901, issue of the Brooklyn Daily Times featured an advertisement from the Edison Electric Illuminating Co. of Brooklyn offering miniature electric lamps for Christmas tree lighting, available for purchase or rental. The ad stated that wiring could be easily arranged if the building had electric service.

In 1903, General Electric, which had purchased Edison’s lightbulb factory in 1890, began selling the first Christmas tree lighting kits. These kits included a string of eight varied-colored glass bulbs and a connector for an electrical socket.

The kit cost $12 (about $430 today) and was marketed as a safer, easier way to light a Christmas tree, though the bulbs still posed some fire risk due to their heat.

In 1917, 15-year-old Albert Sadacca, along with his 22-year-old brother Leon and younger brother Henri, began marketing affordable electrically-powered Christmas lights through their family’s Ever-Ready Light Company.

The Nov. 28, 1920, Minneapolis Journal newspaper featured a Peerless Electrical Company advertisement for a “new kind of tree lighting set” from General Electric called the GE Christmas Arborlux, featuring smaller translucent lamp bulbs.

President Calvin Coolidge turned on the switch of the first officially recognized outdoor national Christmas tree Dec. 24, 1923.

The 48-foot balsam fir, trimmed with 2,500 red, white, and green bulbs, was lit on the Ellipse located south of the White House.

The ceremony drew more than 6,000 visitors and featured Christmas carols and a performance by the US Marine Band.

The lighting of the first national Christmas tree marked the beginning of a tradition that continues to this day.

Today’s Christmas lights mainly feature LEDs (light-emitting diodes), which are safer and more energy-efficient than older incandescent bulbs, like the cone-shaped glass ones I remember from my youth.

In the future, “smart” Christmas trees may feature lights enhanced by nanotechnology, bioluminescence, holographic projections, and immersive AI-augmented reality, perhaps powered by energy-harvesting technology that captures excess heat in the room.

They will no doubt provide a unique and immersive Christmas experience.

Stay tuned.


Friday, December 6, 2024

The mysterious miniature ‘space shuttle’

© Mark Ollig

The X-37B is an uncrewed spaceplane shrouded in secrecy with a design resembling a miniature NASA Space Shuttle.

NASA initiated the X-37 program in 1999, aiming to develop a reusable space transportation system.

This program encompassed the X-37B, a small, autonomous spacecraft designed for operation in low Earth orbit.

In 2004, the X-37B program was transferred from NASA to the US Air Force. 

Today, operational control of the X-37B resides with the US Space Force, which assumed responsibility in 2020.

The American-made X-37B is a reusable orbital spaceplane measuring 29 feet in length, with a wingspan of 15 feet and a height of 9.5 feet.

Its weight remains officially classified, although various sources suggest it is around 11,000 pounds.

The X-37B is designed for vertical takeoffs and is launched encased within a protective fairing shell atop a rocket. 

After completing a mission, the spaceplane lands on a runway, similar to NASA’s space shuttle.

The US Air Force’s official website states that the X-37B is a reusable experimental spacecraft for conducting orbital experiments and advancing future US space technologies.

The payloads and experiments conducted using the X-37B are mainly classified.

The X-37B’s designation follows the US military’s experimental aircraft naming convention. The “X” denotes experimental, “37” is a sequential number, and “B” indicates the second iteration.

Note that not all numbers in the one through 36 sequence correspond to actual aircraft due to canceled projects and designation changes.

The Boeing X-37B made its first flight as Orbital Test Vehicle-1 (OTV-1), also known as USA-212, launching April 22, 2010, aboard an Atlas V 501 rocket from Cape Canaveral, FL.

The Atlas V 501 rocket, operated by United Launch Alliance, stood at 191.3 feet tall, had a diameter of 12.5 feet, and weighed approximately 1.3 million pounds.

The rocket’s RD-180 liquid-fueled engine produced 860,000 pounds-force of thrust.

The Atlas V 501’s Centaur upper stage was powered by an RL10A-4-2 liquid-fueled engine, producing 22,300 pounds-force of thrust.

The X-37B’s first mission, lasting 224 days, tested technologies such as guidance, navigation, and control systems; thermal protection; satellite sensors; autonomous orbital operation; and re-entry and landing capabilities.

The X-37B reportedly operated in low Earth orbit at an altitude of approximately 150 to 500 miles above the planet.

The X-37B contributes to Space Domain Awareness (SDA), which entails monitoring space activities and objects, including satellites and debris, that could impact US operations.

SDA helps the Space Force identify potential orbital threats and ensure safe operations for US spacecraft and satellites.

The X-37B operates in low Earth orbit, gathering data on satellites, debris, and other space activities. 

Its secrecy is driven by national security, the pursuit of technological advantage, and protection against adversaries.

General information about launches, non-classified orbital details, and landings is typically made public; however, specific mission objectives, payload details, and most experiment specifics remain confidential.

The X-37B tests various new space technologies, including advanced reusable spacecraft systems, autonomous navigation and control systems, and novel propulsion technologies.

Today, the Air Force’s Rapid Capabilities Office is involved in the program’s development, while the US Space Force oversees on-orbit operations.

The X-37B OTV-6 mission launched May 17, 2020, from Cape Canaveral Space Force Station, FL, aboard an Atlas V 501 rocket.

A service module was introduced on the X-37B with the OTV-6 mission, enabling the spacecraft to carry out more experiments, store additional fuel, increase the spacecraft’s orbital range and duration, and support complex maneuvers.

The module also facilitates payload deployment and retrieval, hosting experiments like the Naval Research Laboratory’s Photovoltaic Radiofrequency Antenna Module and the US Air Force Academy’s FalconSat-8.

After spending 908 days in orbit, the X-37B landed at NASA’s Kennedy Space Center Nov. 12, 2022, ending the OTV-6 mission.

The X-37B’s seventh mission, OTV-7 (Orbital Test Vehicle-7), designated USSF-52, launched aboard a SpaceX Falcon Heavy rocket from Cape Canaveral Space Force Station Dec. 28, 2023.

The Falcon Heavy may have been chosen for this mission because of the X-37B’s increased payload, enabling it to carry more fuel, conduct more experiments, or reach higher orbits.

The US Air Force announced that the X-37B began a series of aerobraking maneuvers Oct. 10 of this year.

These maneuvers are used to modify the spacecraft’s orbit by slowing it down with Earth’s atmosphere, which saves fuel and enables extended missions.

The secretive miniature ‘space shuttle,’ the X-37B, is currently in Earth orbit. It will continue its mission before eventually de-orbiting and returning to Earth, as it has done successfully in its previous six missions.

While many details about the X-37B remain classified, it continues to intrigue and generate speculation.

The X-37B remains a classified conundrum wrapped in an enigma.

After eight months in space, a Chinese “reusable experimental spacecraft” landed in the Gobi Desert in northern China Sept. 6.

While not officially confirmed as a direct response to the secretive X-37B spaceplane program, it does appear to signal a strategic answer to advancements in US spaceplane technology.




Friday, November 29, 2024

The ‘Early Bird’ still soars

© Mark Ollig


In October 1945, science fiction writer Arthur C. Clarke foretold the use of space satellites in an article titled “Extra-Terrestrial Relays – Can Rocket Stations Give Worldwide Radio Coverage?”

He wrote it for Wireless World, an engineering and technology magazine published in the United Kingdom.

Clarke proposed a system of “extra-terrestrial relays” and used the term “space stations” to describe the concept of artificial satellites orbiting the Earth for communication purposes.

He proposed positioning three satellites in orbits roughly 26,000 miles from Earth’s center (about 22,300 miles above the surface) to provide continuous global radio coverage.

Each satellite would act like a communication relay. It would pick up signals from anywhere within its coverage area on Earth and then broadcast those signals to other locations within its hemisphere.

Clarke also addressed the need for powerful rockets to place these satellites into orbit, stating, “The development of rockets sufficiently powerful to reach orbital, and even [earth gravitational] escape velocity is now only a matter of years.”

President Kennedy signed the Communications Satellite Act Aug. 31, 1962, leading to the creation of the Communications Satellite Corporation (COMSAT), which the US Congress authorized to establish a global commercial satellite communication system.

In 1964, COMSAT played a pivotal role in the establishment of the International Telecommunications Satellite Organization (INTELSAT) to oversee communications satellites for providing telephone, television, and data transmission services on a global scale.

NASA successfully launched the INTELSAT 1 F-1 satellite, named INTELSAT 1, atop a three-stage Delta rocket from Complex 17A at Cape Kennedy, FL, April 6, 1965.

It was nicknamed “Early Bird” from the saying, “The early bird catches the worm.”

It was the first satellite launched and operated by an intergovernmental consortium called INTELSAT, founded in 1964.

Hughes Aircraft Company built INTELSAT 1 for COMSAT; it was the first commercial communications satellite placed in geosynchronous orbit.

At an altitude of 22,300 miles, the satellite, spinning at 152 revolutions per minute, was positioned over the equator at 28° west longitude in a synchronous equatorial orbit over the Atlantic Ocean.

Early Bird’s orbital period matched Earth’s rotation, so the satellite appeared to hover over a fixed point above the planet.
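As a quick check (a minimal Python sketch of my own, not from the original column), Kepler’s third law shows why an orbit whose period matches Earth’s rotation works out to roughly that 22,300-mile altitude:

```python
import math

GM_EARTH = 3.986e14        # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1     # Earth's rotation period, seconds
EARTH_RADIUS_MI = 3959     # mean Earth radius, miles
METERS_PER_MILE = 1609.34

# Kepler's third law: orbital radius r = (GM * T^2 / (4 * pi^2))^(1/3)
radius_m = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_mi = radius_m / METERS_PER_MILE - EARTH_RADIUS_MI

print(f"Geostationary altitude: about {altitude_mi:,.0f} miles")
# Prints about 22,240 miles, close to the 22,300-mile figure cited for Early Bird.
```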

Ground stations adjusted their antennas for a direct line of sight to the satellite, ensuring uninterrupted data transmission between North America and Europe.

This 85-pound, cylindrical satellite (28 inches in diameter, 23 inches tall) used solar cells to power its electronics, which included two six-watt transponders operating on a 50 MHz bandwidth.

The Early Bird could handle 240 simultaneous transatlantic phone calls, along with telegraph and facsimile transmissions, or one television channel.

Early Bird transmitted television coverage of the Gemini 6 spacecraft splashdown Dec. 16, 1965, with astronauts Thomas Stafford and Walter Schirra onboard.

I first became curious about the Early Bird satellite while watching a YouTube video of heavyweight boxing champion Muhammad Ali fighting Cleveland Williams Nov. 14, 1966.

“I’d like at this time to compliment the thousands of people in the United Kingdom, who, where it is nearly four-o’clock, are jamming the theaters over there to see our telecast via the Early Bird satellite,” announced boxing commentator Don Dunphy.

The Early Bird satellite used one channel to broadcast television programs between the two continents, ushering in a new era of live international television.

The phrase ‘live via satellite’ emerged during this era of live transatlantic television broadcasts.

The success of Early Bird proved the practicality of using synchronous orbiting space satellites for commercial communications.

Early Bird ceased operation in January 1969; however, it was reactivated in July of that year when a communications satellite assigned to the Apollo 11 moon mission failed.

In August 1969, the INTELSAT 1-F1 satellite, the Early Bird, was deactivated.

In 1990, INTELSAT briefly reactivated Early Bird to commemorate the satellite’s 25th anniversary.

According to NASA, as of today, “Early Bird is currently inactive.”

An INTELSAT video of Early Bird’s April 6, 1965, launch from Cape Canaveral, FL, can be seen at https://tinyurl.com/y4uvqn2s.

In the video, look for two famous Minnesotans: Hubert Humphrey, vice president of the United States, then chairman of the National Aeronautics and Space Council, and Sen. Walter Mondale, who witnessed (via closed-circuit TV) the rocket launch of the Early Bird.

LIFE magazine had an article about the satellite May 7, 1965, cleverly titled, “The Early Bird Gets the Word.”

In 1965, the word was the Early Bird, which reminds me of the Minneapolis garage band The Trashmen’s 1963 hit “Surfin’ Bird,” with its lyrics “A-well-a don’t you know about the bird? Well, everybody knows that the bird is a word.”

Arthur C. Clarke’s 1945 article can be read at: https://bit.ly/Clarke1945.

The full 68-page October 1945 Wireless World magazine can be read at: https://bit.ly/3YYc48r.

The 59-year-old Early Bird satellite still soars approximately 22,300 miles above us in a geosynchronous orbit. Its International Designator Code is 1965-028A.

The satellite tracking website n2yo.com shows the Early Bird’s location in real-time; see it at https://bit.ly/3gLNTSB.

In 2025, INTELSAT will be donating a full-sized replica of the INTELSAT 1 satellite, aka the Early Bird, to the Smithsonian’s National Air and Space Museum in Washington, DC.



Friday, November 22, 2024

AI engines: GPUs and beyond

© Mark Ollig

Founded on April 5, 1993, NVIDIA Corp. is headquartered in Santa Clara, CA.

“NVIDIA” originates from the Latin word “invidia,” meaning “envy.” The company’s founders reportedly chose this name with the aim of creating products that would be the envy of the tech industry.

Initially, NVIDIA focused on developing graphics processing units (GPUs) for the computer gaming market.

Supercomputing graphics cards rely on specialized electronic circuits, primarily the GPU, to perform a large number of calculations.

These circuits act as the “computing muscle,” allowing the card to handle complex graphics rendering, AI processing, and other workloads.

GPUs execute complex calculations and accelerate applications with heavy graphics and video processing demands.

Graphics cards are used for computer gaming, self-driving cars, medical imaging analysis, and artificial intelligence (AI) natural language processing.

The growing demand for large language AI models and applications has expanded NVIDIA’s graphics card sales.

The NVIDIA H200 Tensor Core GPU datasheet states that it is designed for generative AI and high-performance computing (HPC) and is equipped with specialized processing units that enhance performance in AI computations and matrix operations.

The NVIDIA H200 Tensor Core GPU features 141 GB of High-Bandwidth Memory 3 Enhanced (HBM3e), ultra-fast memory technology enabling rapid data transfer of large AI language models and scientific computing tasks.

The H200 is the first graphics card to offer HBM3e memory, providing 1.4 times more memory bandwidth compared to the H100 and nearly double the data storage capacity.

It also has a memory bandwidth of 4.8 terabytes per second (TB/s), allowing it to move large amounts of data between the GPU and its onboard memory.
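As a rough sanity check on those comparisons, here is a small Python sketch (my own illustration, using the H100 figures of 80 GB and 3.35 TB/s cited in the Nov. 15 column below):

```python
# Illustrative comparison of the published memory specs cited in these columns
h100 = {"memory_gb": 80,  "bandwidth_tbps": 3.35}   # NVIDIA H100
h200 = {"memory_gb": 141, "bandwidth_tbps": 4.8}    # NVIDIA H200

bandwidth_ratio = h200["bandwidth_tbps"] / h100["bandwidth_tbps"]
capacity_ratio = h200["memory_gb"] / h100["memory_gb"]

print(f"Memory bandwidth: {bandwidth_ratio:.2f}x the H100")  # about 1.43x
print(f"Memory capacity:  {capacity_ratio:.2f}x the H100")   # about 1.76x, nearly double
```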

This memory bandwidth significantly increases computing capacity and performance, speeding up scientific computing so researchers can work more efficiently.

It is based on NVIDIA’s Hopper architecture, which improves GPU performance and efficiency for AI and high-performance computing workloads; the architecture is named after computer scientist Grace Hopper.

In AI, “inference” is the step in which a trained model takes in new information and produces results. To do this quickly, AI systems often run across multiple computers in the “cloud” (computers connected over the internet).

The H200 roughly doubles inference speed compared with H100 graphics cards when running large language models like Meta AI’s Llama 3.

The higher memory bandwidth ensures faster data transfer, reducing bottlenecks in complex processing tasks.

It is designed to handle the increasingly large and complex data processing demands of modern AI.

As these tasks become more complex, GPUs need to become more powerful and efficient. Researchers are exploring several technologies to achieve this.

Next-generation memory systems, like 3D-stacked memory, which layers memory cells vertically, will enhance data transfer speeds.

High-Performance Computing (HPC) leverages powerful computers to solve complex challenges in fields like scientific research, weather forecasting, and cryptography.

Generative AI is a technology for creating writing, pictures, or music. It also powers large language models, which are capable of understanding and generating text that resembles human writing.

Powerful GPUs generate significant heat and require advanced cooling.

AI optimizes GPU performance by adjusting settings, improving efficiency, and extending the hardware’s lifespan. Many programs use AI to fine-tune graphics cards for optimal performance and energy savings.

Quantum processing uses the principles of quantum mechanics to solve complex problems that are too difficult to address using traditional computing methods.

Neuromorphic computing, built around spiking neural networks, seeks to replicate the efficiency and learning architecture of the human brain.

As GPUs push the limits of classical computing, quantum computing is emerging with QPUs (quantum processing units) at its core.

QPUs use quantum mechanics to solve problems beyond the reach of even the most powerful GPUs, with the potential for breakthroughs in AI and scientific research.

Google Quantum AI Lab has developed two quantum processing units: Bristlecone, with 72 qubits, and Sycamore, with 53 qubits.

While using different technologies, QPUs and GPUs may someday collaborate in future hybrid computing systems, leveraging their strengths to drive a paradigm shift in computing.

Google’s Tensor Processing Units (TPUs) specialize in deep learning tasks, such as matrix multiplications.
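To make that concrete, here is a minimal Python/NumPy sketch (my own illustration, not tied to any specific accelerator) of the kind of matrix multiplication at the heart of a neural network layer, the operation GPUs and TPUs are built to run at enormous scale:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy neural-network layer: 256 input samples, 512 features in, 128 features out
inputs = rng.standard_normal((256, 512))    # batch of input vectors
weights = rng.standard_normal((512, 128))   # layer weight matrix
bias = rng.standard_normal(128)

# The core operation AI accelerators are designed to speed up:
# one large matrix multiplication plus a bias term
outputs = inputs @ weights + bias           # result shape: (256, 128)

print(outputs.shape)
```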

Other key processors fueling AI computing include Neural Processing Units (NPUs), which accelerate neural network training and execution, and Field-Programmable Gate Arrays (FPGAs), which excel at parallel processing and are essential for customizing AI workloads.

Additional components utilized for AI processing are Application-Specific Integrated Circuits (ASICs) for tailored applications, System-on-Chip designs integrating multiple components, graphics cards leveraging GPU architecture, and Digital Signal Processors (DSPs).

These components are advancing machine learning, deep learning (a technique using layered algorithms to learn from massive amounts of data), natural language processing, and computer vision.

GPUs, TPUs, NPUs, and FPGAs are the “engines” fueling the computational and processing power of supercomputing and artificial intelligence.

They will likely be integrated with and work alongside quantum processors in future hybrid AI systems.

I still find the advancements in software and hardware technology that have unfolded throughout my lifetime incredible.
I used Meta AI's large language model program (powered by Llama 3) and its text-to-image generator (AI Imagined) to create the attached image of two people standing in front of a futuristic "AI Quantum Supercomputer." The image was created using my text input, and the AI generated the image with no human modifications.






Friday, November 15, 2024

Accelerating the future: supercomputing, AI, part one

© Mark Ollig


Intel Corp. (INTC), founded in 1968, had been the semiconductor sector’s major representative on the Dow Jones Industrial Average (DJIA) since 1999; that changed this month.

The DJIA, established May 26, 1896, with 12 companies, is today an index tracking the stock performance of 30 major companies traded on the US stock exchange.

In 1969, Intel partnered with Busicom, a Japanese calculator manufacturer founded in 1948, to develop a custom integrated circuit (IC) for its calculators.

This contract led to the creation of the Intel 4004, the first commercial microprocessor, introduced in 1971 for Busicom’s 141-PF calculator; the 4004 combined the central processing unit (CPU), memory, and input and output controls on a single chip.

By integrating transistors, resistors, and capacitors, ICs like the Intel 4004 revolutionized the electronics industry, enabling the design of smaller, more powerful devices. Intel designs and manufactures ICs, including microprocessors, which form the core of a CPU.

Today’s Intel CPUs are supplied to computer manufacturers, who integrate them into laptops, desktops, and data centers for cloud computing and storage.

The DJIA recently replaced Intel with NVIDIA (NVDA), a technology company founded in 1993.

“NVIDIA” originates from the Latin word “invidia,” meaning “envy.” The company’s founders reportedly chose this name with the aim of creating products that would be the envy of the tech industry (interestingly, their green logo hints at this, symbolizing the ‘green with envy’ competitors).

NVIDIA has outpaced Intel in artificial intelligence (AI) hardware; however, Intel has strengthened its AI capabilities with its Core Ultra processor series and the Gaudi 3 AI accelerator.

Gaudi 3 is a data center-focused AI accelerator designed for specialized workloads such as processing AI data in large-scale computing centers and AI high-level language training.

The DJIA has clearly recognized this shift, highlighting NVIDIA’s growing influence in AI via its graphics processing units (GPU) technology.

The GPU functions as the computing muscle executing tasks, specifically complex calculations and graphic processing necessary for graphics-heavy applications.

Best known for designing and manufacturing high-performance GPUs, NVIDIA released its first graphics processor, the NV1, in 1995, followed by the RIVA 128 in 1997 and the GeForce 256 in 1999.

GPUs are processors engineered for parallel computation, excelling in tasks that require simultaneous processing, such as rendering complex graphics in video games and editing high-resolution videos.

Their architecture also makes them especially suited for AI and machine learning applications, where their ability to rapidly process large datasets substantially reduces the time required for AI model training.
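One simple way to see that advantage is to compare element-by-element processing of a large dataset with a single vectorized operation, the data-parallel style GPUs take much further; the short Python sketch below is my own illustration and not a benchmark of any particular GPU:

```python
import time
import numpy as np

data = np.random.rand(10_000_000)  # a large dataset of 10 million values

# Element-by-element (serial) processing
start = time.perf_counter()
serial_result = [x * 2.0 + 1.0 for x in data]
serial_time = time.perf_counter() - start

# Vectorized (data-parallel style) processing of the whole array at once
start = time.perf_counter()
vector_result = data * 2.0 + 1.0
vector_time = time.perf_counter() - start

print(f"Serial loop: {serial_time:.2f} s")
print(f"Vectorized:  {vector_time:.3f} s")
# The vectorized version is typically far faster; GPUs push this same idea
# much further by running thousands of such operations in parallel.
```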

Originally focused on graphics rendering, GPUs have evolved to become essential for a wide range of applications, including creative production, edge computing, data analysis, and scientific computing.

NVIDIA’s RTX 4070 Super GPU, priced between $500 and $600, targets mainstream gamers and content creators seeking 4K resolution.

For more complex workloads, the RTX 4070 Ti Super GPU, priced at around $800, offers higher performance for computing 3D modeling, simulation, and analysis for engineers and other professionals.

Financial experts use GPU technology for analyzing data used in monetary predictions and risk assessments to make better decisions.

Elon Musk launched his new artificial intelligence company, “xAI,” March 9, 2023, with the stated mission to “understand the universe through advanced AI technology.”

The company’s website announced the release of Grok-2, the latest version of its AI language model, featuring “state-of-the-art reasoning capabilities.”

Grok is reportedly named after a Martian word in Robert A. Heinlein’s science fiction novel “Stranger in a Strange Land,” implying a deep, intuitive understanding; at least he did not name it HAL 9000 (“2001: A Space Odyssey”) or the M-5 multitronic computer system (“Star Trek”).

Colossus, xAI’s AI training supercomputer, employs NVIDIA’s Spectrum-X Ethernet networking platform, providing 32 petabits per second (Pbps) of bandwidth to handle the data flows necessary for training large AI language models.

It also uses 100,000 NVIDIA H100 GPUs (the “H” honors Grace Hopper, a pioneering computer scientist), with plans to expand to 200,000 GPUs, including the new NVIDIA H200 models.

The NVIDIA H100 GPU has 80 GB of HBM3 high-bandwidth memory and 3.35 terabytes per second (TB/s) of bandwidth.

The NVIDIA H200 GPU, released in the second quarter of 2024, has 141 GB of HBM3e (extended) memory and 4.8 TB/s bandwidth.

It provides better performance than the H100 in some AI tasks, with up to a 45% increase; offers up to two times faster inference speeds (a measure of how quickly an AI model processes new information and generates results) on large language models; and uses less energy than the H100.

Both the H100 and H200 GPUs are based on the Hopper architecture.

Be sure to read next week’s always-exciting Bits and Bytes for part two.


NVIDIA HGX H200 141GB 700W 8-GPU Board

Elon Musk’s ‘Colossus’ AI training system with 100,000 Nvidia chips (GPU module in the foreground)