Tweet This! :)

Friday, December 20, 2024

Jessica’s question: a Christmas tale

© Mark Ollig

On Dec. 22, 2008, I wrote a special Christmas column about a young person named Jessica who asked a question about Santa Claus.

I remember my mother enjoying that column, and in her memory, I am republishing it today with a few modifications.

The column started with Jessica asking, “Does Santa Claus use a computer?”

All right, Jessica, I emailed my list of North Pole contacts and found one elf from Santa’s North Pole Toy Workshop who investigated your question.

Finarfin Elendil moonlights as a freelance journalist with the North Pole Frosty newspaper during the Christmas offseason.

He informed me that the jolly old elf with a white beard, a broad face, and a little round belly that shook when he laughed like a bowl full of jelly is very computer savvy.

The computer located at the Claus Computer Center (CCC) is cleverly concealed beneath the North Pole’s main toy-making factory. According to Elendil, it oversees the operation of Santa’s key toy production facilities.

The CCC uses state-of-the-art North Pole (NP) quantum computer technology to effectively analyze the wish lists sent to Santa.

Sophisticated software ensures that gifts for all the good girls and boys are processed quickly and that delivery routes are streamlined for the smooth, timely delivery of toys aboard Santa’s airborne sled, code-named Sleigh-One.

Sleigh-One is more than a flying wooden toboggan; it features an onboard mini-computer networked in real time with the CCC, providing Santa with up-to-date information.

A 3D holographic display on Sleigh-One shows Global Positioning System data.

One display monitors Sleigh-One’s speed and altitude in MPR (miles per reindeer), the combined reindeer-powered output of Dasher, Dancer, Prancer, Vixen, Comet, Cupid, Donner, and Blitzen. And, of course, because of his bright, shiny red nose, Rudolph, the “Red-Nosed Reindeer,” is officially designated by Santa as “Reindeer One.”

Sleigh-One also features a display showing the number of presents delivered, along with cup holders that Santa and Mrs. Claus (when traveling with him) use to hold their eggnog.

The telemetry data it receives from the CCC maps and computes the coordinates of every rooftop chimney worldwide that Santa descends into when delivering presents.

Elendil explained that if a home doesn’t have a chimney, Santa’s computer kicks into gear and activates the “back door” software program, which uses magical algorithms for Christmas present deliveries.

Of course, there can be glitches.

Elendil reported that one time, the software program filled a neighbor’s refrigerator with pickled herring and even filled a chimney with fruitcake! But now, he said, everything works perfectly.

When I was a child, one of my favorite Christmas TV specials was “Rudolph the Red-Nosed Reindeer,” which first aired back in 1964.

One highlight of the special was the Christmas Eve scene in which a major blizzard hit the North Pole.

Santa realized he could not navigate his reindeer sleigh through the storm, leading him to consider canceling Christmas.

However, a young reindeer named Rudolph had a very bright red nose that would, as Santa said, “cut through the murkiest storm they can dish up.”

That Christmas Eve, as the doors to the North Pole’s Toy Workshop opened, the heavy blowing snow rushed towards Sleigh-One, which was ready to deliver presents throughout the world.

Santa: “Ready, Rudolph?”

Rudolph: “Ready, Santa!”

Santa: “Well, let’s be on our way. OK, Rudolph. Full power!”

Returning to Jessica’s question, Elendil reported the North Pole’s supercomputer confirmed that there are approximately 131 million households in the U.S.

The world’s population is nearly 8 billion, with homes scattered across 196.9 million square miles of Earth’s surface.

Further calculations, taking into account densely populated areas, reveal an average distance of 0.1 miles between homes.

To deliver all the Christmas presents in a single night, Sleigh-One averages a cruising speed of nearly 1.3 million miles per hour!
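
For fun, here is a back-of-envelope check of Elendil’s math in Python; the stop count and the 31-hour delivery window (gained by flying westward with the time zones) are my own playful assumptions, not official North Pole figures.

# A playful back-of-envelope check of Sleigh-One's cruising speed.
# Assumed figures (not official North Pole data): roughly 400 million
# rooftop stops worldwide and a 31-hour delivery window, which Santa
# gains by flying westward with the Earth's time zones.

stops = 400_000_000          # assumed worldwide delivery stops
miles_between_homes = 0.1    # average spacing, from the column
delivery_hours = 31          # assumed time-zone-extended night

route_miles = stops * miles_between_homes
speed_mph = route_miles / delivery_hours

print(f"Route length: {route_miles:,.0f} miles")
print(f"Required speed: {speed_mph:,.0f} mph")  # about 1.3 million mph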

Elendil shared the story of when Dasher asked Santa if the sleigh could travel at the speed of light, which is 186,000 miles per second, or about 671 million miles per hour.

Santa explained that if he traveled that fast, Rudolph’s nose light would trail behind the sleigh like a comet’s tail, and they would enter a time warp, traveling backward in time and delivering presents before Christmas.

Therefore, instead of reaching light speed, they used a specially designed sleigh equipped with magical transwarp-time drive capabilities.

Well, Jessica, I hope you found this story fun to read; I sure enjoyed writing it.

Christmas originates from the Old English phrase “Cristes maesse,” meaning “Christ’s Mass.” This phrase first appeared in historical documents around 1038 AD. It evolved into the Middle English “Christemasse” and the modern “Christmas.”

As we grow older and face the challenges life presents, it’s important to hold onto the magical memories of Christmas — even amid our challenges, there is still room for wonder and joy.

I wish you all a very Merry Christmas.

Finarfin Elendil, who moonlights as a freelance journalist with the North Pole Frosty newspaper. I created the image using the Meta AI artificial intelligence program (AI Imagined), which generated the image based on my text prompts.

Friday, December 13, 2024

A bright idea: an electrically lit Christmas tree

© Mark Ollig

Up until the late 19th century, Christmas trees were decorated with garlands of popcorn, homemade ornaments, and edible treats like berries, nuts, cookies, and fruits.

Lighted candles were also used on Christmas trees, but due to fire hazards, a bucket of water was often kept nearby.

In 1879, Thomas Edison developed a practical incandescent light bulb at his Menlo Park, NJ, laboratory.

By the Christmas season of 1880, he was demonstrating his electric lighting system, stringing bulbs outside his lab for the nearby railroad passengers to see.

The New York Times reported Dec. 21, 1880, that New York City officials visited Thomas Edison’s Menlo Park laboratory, where they were impressed by his electric lighting. The article noted a walkway illuminated with 290 electric lamps, “which cast a soft and mellow light on all sides.”

In December of 1882, Edward H. Johnson, an associate of Thomas Edison and the vice president of the Edison Electric Illuminating Company, created the first electrically illuminated Christmas tree.

He hand-wired a string of 80 red, white, and blue electric light bulbs on the tree in the parlor room of his home.

Johnson placed the illuminated Christmas tree on a slowly rotating platform (powered by a direct-current electric dynamo generator) next to a window so the tree was visible from the street.

After visiting Johnson’s home on West 12th Street and seeing his electrically lit Christmas tree, William Augustus Croffut, a journalist for the Detroit Post and Tribune, wrote a Dec. 22, 1882, newspaper article saying, “Last evening, I walked over beyond Fifth Avenue and called at the residence of Edward H. Johnson, vice-president of Edison’s electric company. There, at the rear of the beautiful parlors, was a large Christmas tree, presenting a most picturesque and uncanny aspect.”

He went on to describe the electrical lights on the Christmas tree, “It was brilliantly lighted with many colored globes about as large as an English walnut and was turning some six times a minute on a little pine box. There were 80 lights in all encased in these dainty glass eggs, and about equally divided between white, red, and blue.”

In the following years, Johnson continued to refine his Christmas tree display, increasing the number of lights.

An 1884 New York Times article said of his Christmas tree, “It stood about six feet high, in an upper room, and dazzled persons entering the room. There were 120 lights on the tree, with globes of different colors.”

Despite the public’s fascination with Johnson’s colorful bulbs, the transition from traditional candles on Christmas trees was gradual, slowed by high costs and limited access to electricity in smaller cities and rural areas.

In 1895, President Grover Cleveland had electric lights placed on the indoor White House tree, which helped popularize the practice of lighting Christmas trees electrically.

The Dec. 6, 1901, issue of the Brooklyn Daily Times featured an advertisement from the Edison Electric Illuminating Co. of Brooklyn offering miniature electric lamps for Christmas tree lighting, available for purchase or rental. The ad stated that wiring could be easily arranged if the building had electric service.

In 1903, General Electric, formed in 1892 through a merger that included Edison’s electric companies, began selling the first Christmas tree lighting kits. These kits included a string of eight varied-colored glass bulbs and a connector for an electrical socket.

The kit cost $12 (about $430 today) and was marketed as a safer, easier way to light a Christmas tree, though the bulbs still posed some fire risk due to their heat.

In 1917, 15-year-old Albert Sadacca, along with his 22-year-old brother Leon and younger brother Henri, began marketing affordable electrically-powered Christmas lights through their family’s Ever-Ready Light Company.

The Nov. 28, 1920, Minneapolis Journal newspaper featured a Peerless Electrical Company advertisement for a “new kind of tree lighting set” from General Electric, the GE Christmas Arborlux, featuring smaller translucent lamp bulbs.

President Calvin Coolidge flipped the switch to light the first officially recognized outdoor national Christmas tree Dec. 24, 1923.

The 48-foot balsam fir, trimmed with 2,500 red, white, and green bulbs, was lit on the Ellipse located south of the White House.

The ceremony drew more than 6,000 visitors and featured Christmas carols and a performance by the US Marine Band.

The lighting of the first national Christmas tree marked the beginning of a tradition that continues to this day.

Today’s Christmas lights mainly feature LEDs (light-emitting diodes), which are safer and more energy-efficient than older incandescent bulbs, like the cone-shaped glass ones I remember from my youth.

In the future, “smart” Christmas trees may feature lights enhanced by nanotechnology, bioluminescence, holographic projections, and AI-augmented reality, perhaps even drawing power from a room’s excess heat through energy-harvesting technology.

They will no doubt provide a unique and immersive Christmas experience.

Stay tuned.


Friday, December 6, 2024

The mysterious miniature ‘space shuttle’

© Mark Ollig

The X-37B is an uncrewed spaceplane shrouded in secrecy with a design resembling a miniature NASA Space Shuttle.

NASA initiated the X-37 program in 1999, aiming to develop a reusable space transportation system.

This program encompassed the X-37B, a small, autonomous spacecraft designed for operation in low Earth orbit.

In 2004, the X-37B program was transferred from NASA to the US Air Force. 

Today, operational control of the X-37B resides with the US Space Force, which assumed responsibility in 2020.

The American-made X-37B is a reusable orbital spaceplane measuring 29 feet in length, with a 15-foot wingspan and a height of 9.5 feet.

Its weight remains officially classified, although various sources suggest it is around 11,000 pounds.

The X-37B is designed for vertical takeoffs and is launched encased within a protective fairing shell atop a rocket. 

After completing a mission, the spaceplane lands on a runway, similar to NASA’s space shuttle.

The US Air Force’s official website states that the X-37B is a reusable experimental spacecraft for conducting orbital experiments and advancing future US space technologies.

The payloads and experiments conducted using the X-37B are mainly classified.

The X-37B’s designation follows the US military’s experimental aircraft naming convention. The “X” denotes experimental, “37” is a sequential number, and “B” indicates the second iteration.

Note that not all numbers in the one through 36 sequence correspond to actual aircraft due to canceled projects and designation changes.

The Orbital Test Vehicle-1 (OTV-1), also known as USA-212, marked the Boeing X-37B’s first flight when it launched aboard an Atlas V 501 rocket from Cape Canaveral, FL, April 22, 2010.

The Atlas V 501 rocket, operated by United Launch Alliance, stood at 191.3 feet tall, had a diameter of 12.5 feet, and weighed approximately 1.3 million pounds.

The rocket’s RD-180 liquid-fueled engine produced 860,000 pounds-force of thrust.

The Atlas V 501’s Centaur upper stage was powered by an RL10A-4-2 liquid-fueled engine, producing 22,300 pounds-force of thrust.

The X-37B’s first mission, lasting 224 days, tested technologies such as guidance, navigation, and control systems; thermal protection; satellite sensors; autonomous orbital operation; and re-entry and landing capabilities.

The X-37B reportedly operated in low Earth orbit at an approximate altitude range of 150 to 500 miles above the planet.

The X-37B contributes to Space Domain Awareness (SDA), which entails monitoring space activities and objects, including satellites and debris, that could impact US operations.

SDA helps the Space Force identify potential orbital threats and ensure safe operations for US spacecraft and satellites.

Its secrecy is driven by national security, the pursuit of technological advantage, and protection against adversaries.

General information about launches, non-classified orbital details, and landings is typically made public; however, specific mission objectives, payload details, and most experiment specifics remain confidential.

The X-37B tests various new space technologies, including advanced reusable spacecraft systems, autonomous navigation and control systems, and novel propulsion technologies.

Today, the Air Force’s Rapid Capabilities Office is involved in the program’s development, while the US Space Force oversees on-orbit operations.

The X-37B OTV-6 mission launched May 17, 2020, from Cape Canaveral Space Force Station, FL, aboard an Atlas V 501 rocket.

A service module was introduced on the X-37B with the OTV-6 mission, enabling the spacecraft to carry out more experiments, store additional fuel, increase the spacecraft’s orbital range and duration, and support complex maneuvers.

The module also facilitates payload deployment and retrieval, hosting experiments like the Naval Research Laboratory’s Photovoltaic Radiofrequency Antenna Module and the US Air Force Academy’s FalconSat-8.

After spending 908 days in orbit, the X-37B landed at NASA’s Kennedy Space Center Nov. 12, 2022, ending the OTV-6 mission.

The X-37B’s seventh mission, OTV-7 (Orbital Test Vehicle-7), designated USSF-52, launched on a SpaceX Falcon Heavy rocket from Cape Canaveral Space Force Station Dec. 28, 2023.

The Falcon Heavy may have been chosen because this mission required greater payload capacity, allowing the X-37B to carry more fuel, conduct more experiments, or reach higher orbits.

The US Air Force announced that the X-37B began a series of aerobraking maneuvers Oct. 10 of this year.

These maneuvers are used to modify the spacecraft’s orbit by slowing it down with Earth’s atmosphere, which saves fuel and enables extended missions.
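
To make the fuel savings concrete, here is a minimal orbital-mechanics sketch in Python (my own illustration with assumed altitudes; the X-37B’s actual orbital parameters are classified). It uses the vis-viva equation to show how a small drag-induced speed loss at the low point of an orbit pulls down its high point without burning propellant.

import math

# A minimal sketch (assumed altitudes; the X-37B's real orbital
# parameters are classified) of why aerobraking saves fuel: a small
# drag-induced speed loss at perigee, the orbit's low point, lowers
# the apogee, its high point, with no propellant spent.

GM = 3.986e14                 # Earth's gravitational parameter, m^3/s^2
R = 6_371_000                 # Earth's mean radius, m

perigee = R + 300_000         # assumed perigee altitude: 300 km
apogee = R + 5_000_000        # assumed apogee altitude: 5,000 km
a = (perigee + apogee) / 2    # semi-major axis of the ellipse

# Vis-viva equation: v^2 = GM * (2/r - 1/a)
v_perigee = math.sqrt(GM * (2 / perigee - 1 / a))

v_after_drag = v_perigee - 20.0                    # lose 20 m/s to drag
a_new = 1 / (2 / perigee - v_after_drag**2 / GM)   # vis-viva solved for a
apogee_new = 2 * a_new - perigee

print(f"Apogee drops {(apogee - apogee_new) / 1000:,.0f} km "
      f"after shedding just 20 m/s at perigee")    # roughly 140 km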

The secretive miniature ‘space shuttle,’ the X-37B, is currently in Earth orbit. It will continue its mission before eventually de-orbiting and returning to Earth, as it has done successfully in its previous six missions.

While many details about the X-37B remain classified, it continues to intrigue and generate speculation.

The X-37B remains a classified conundrum wrapped in an enigma.

After eight months in space, a Chinese “reusable experimental spacecraft” landed in the Gobi Desert region of northern China Sept. 6.

While not officially confirmed as a direct answer to the secretive X-37B spaceplane program, it does seem to signal a strategic response to advancements in US spaceplane technology.




Friday, November 29, 2024

The ‘Early Bird’ still soars

© Mark Ollig


In October 1945, science fiction writer Arthur C. Clarke foretold the use of space satellites in an article titled “Extra-Terrestrial Relays – Can Rocket Stations Give Worldwide Radio Coverage?”

He wrote it for Wireless World, an engineering and technology magazine published in the United Kingdom.

Clarke proposed a system of “extra-terrestrial relays” and used the term “space stations” to describe the concept of artificial satellites orbiting the Earth for communication purposes.

He proposed positioning three satellites in a 24-hour orbit, roughly 26,000 miles from Earth’s center, to provide continuous global radio coverage.

Each satellite would act like a communication relay. It would pick up signals from anywhere within its coverage area on Earth and then broadcast those signals to other locations within its hemisphere.

Clarke also addressed the need for powerful rockets to place these satellites into orbit, stating, “The development of rockets sufficiently powerful to reach orbital, and even [earth gravitational] escape velocity is now only a matter of years.”

President Kennedy signed the Communications Satellite Act Aug. 31, 1962, leading to the creation of the Communications Satellite Corporation (COMSAT), which the US Congress authorized to establish a global commercial satellite communication system.

In 1964, COMSAT played a pivotal role in the establishment of the International Telecommunications Satellite Organization (INTELSAT) to oversee communications satellites for providing telephone, television, and data transmission services on a global scale.

NASA successfully launched the INTELSAT 1 F-1 satellite, better known as INTELSAT 1, atop a three-stage Delta rocket from Complex 17A at Cape Kennedy, FL, April 6, 1965.

It was nicknamed “Early Bird” from the saying, “The early bird catches the worm.”

It was the first satellite launched and operated by INTELSAT, the intergovernmental consortium founded the year before.

Hughes Aircraft Company built INTELSAT 1 for COMSAT; it was the first commercial communications satellite placed in geosynchronous orbit.

At an altitude of 22,300 miles, the satellite, spinning at 152 revolutions per minute, was positioned over the equator at 28° west longitude in a synchronous equatorial orbit over the Atlantic Ocean.

Early Bird’s orbital period matched Earth’s rotation, allowing it to appear to hover over a fixed spot on the planet.
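
Here is a quick sketch of the orbital mechanics behind that altitude (my own illustration in Python, not from the column’s sources): Kepler’s third law gives the one orbital radius whose period matches Earth’s rotation.

import math

# Sketch: solve Kepler's third law, T = 2*pi*sqrt(a^3 / GM), for the
# orbital radius a whose period T equals one sidereal day -- the single
# altitude where a satellite keeps pace with Earth's rotation.

GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
T = 86_164           # one sidereal day in seconds (23 h 56 min 4 s)
R = 6_371_000        # Earth's mean radius, m

a = (GM * (T / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_miles = (a - R) / 1609.34

print(f"Geosynchronous altitude: {altitude_miles:,.0f} miles")
# prints about 22,200 miles -- matching Early Bird's reported perch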

Ground stations adjusted their antennas for a direct line of sight to the satellite, ensuring uninterrupted data transmission between North America and Europe.

This 85-pound, cylindrical satellite (28 inches in diameter, 23 inches tall) used solar cells to power its electronics, which included two six-watt transponders operating on a 50 MHz bandwidth.

The Early Bird could handle 240 simultaneous transatlantic phone calls, or telegraph, facsimile, and television transmissions.

Early Bird transmitted television coverage of the Gemini 6 spacecraft splashdown Dec. 16, 1965, with astronauts Thomas Stafford and Walter Schirra onboard.

I first became curious about the Early Bird satellite while watching a YouTube video of heavyweight boxing champion Muhammad Ali fighting Cleveland Williams Nov. 14, 1966.

“I’d like at this time to compliment the thousands of people in the United Kingdom, who, where it is nearly four o’clock, are jamming the theaters over there to see our telecast via the Early Bird satellite,” announced boxing commentator Don Dunphy.

The Early Bird satellite used one channel to broadcast television programs between the two continents, ushering in a new era of live international television.

The phrase ‘live via satellite’ emerged during this era of live trans-Atlantic television broadcasts.

The success of Early Bird proved the practicality of using synchronous orbiting space satellites for commercial communications.

Early Bird ceased operation in January 1969; however, it was reactivated in July of that year when a communications satellite assigned to the Apollo 11 moon mission failed.

In August 1969, the INTELSAT 1-F1 satellite, the Early Bird, was deactivated.

In 1990, INTELSAT briefly reactivated Early Bird to commemorate the satellite’s 25th anniversary.

According to NASA, as of today, “Early Bird is currently inactive.”

An INTELSAT video of Early Bird’s April 6, 1965, launch from Cape Canaveral, FL, can be seen at https://tinyurl.com/y4uvqn2s.

In the video, look for two famous Minnesotans: Hubert Humphrey, vice president of the United States and then chairman of the National Aeronautics and Space Council, and Sen. Walter Mondale, who witnessed the Early Bird’s rocket launch via closed-circuit TV.

LIFE magazine had an article about the satellite May 7, 1965, cleverly titled, “The Early Bird Gets the Word.”

In 1965, the word was the Early Bird, which reminds me of the Minneapolis garage band The Trashmen’s 1963 hit “Surfin’ Bird,” with its lyrics, “A-well-a don’t you know about the bird? Well, everybody knows that the bird is a word.”

Arthur C. Clarke’s 1945 article can be read at: https://bit.ly/Clarke1945.

The full 68-page October 1945 Wireless World magazine can be read at: https://bit.ly/3YYc48r.

The 59-year-old Early Bird satellite still soars approximately 22,300 miles above us in a geosynchronous orbit. Its International Designator Code is 1965-028A.

The satellite tracking website n2yo.com shows the Early Bird’s location in real-time; see it at https://bit.ly/3gLNTSB.

In 2025, INTELSAT will donate a full-sized replica of the INTELSAT 1 satellite, aka the Early Bird, to the Smithsonian’s National Air and Space Museum in Washington, DC.



Friday, November 22, 2024

AI engines: GPUs and beyond

© Mark Ollig

Founded on April 5, 1993, NVIDIA Corp. is headquartered in Santa Clara, CA.

“NVIDIA” originates from the Latin word “invidia,” meaning “envy.” The company’s founders reportedly chose this name with the aim of creating products that would be the envy of the tech industry.

Initially, NVIDIA focused on developing graphics processing units (GPUs) for the computer gaming market.

Supercomputing graphics cards rely on specialized electronic circuits, primarily the GPU, to perform a large number of calculations.

These circuits act as the “computing muscle,” allowing the card to handle complex graphics rendering, AI processing, and other workloads.

GPUs execute complex calculations and accelerate applications with heavy graphics and video processing demands.

Graphics cards are used for computer gaming, self-driving cars, medical imaging analysis, and artificial intelligence (AI) natural language processing.

The growing demand for large language AI models and applications has expanded NVIDIA’s graphics card sales.

The NVIDIA H200 Tensor Core GPU datasheet states it is designed for generative AI and high-performance computing (HPC), and it is equipped with specialized processing units that enhance performance in AI computations and matrix operations.

The NVIDIA H200 Tensor Core GPU features 141 GB of High-Bandwidth Memory 3 Enhanced (HBM3e), an ultra-fast memory technology enabling rapid data transfer for large AI language models and scientific computing tasks.

The H200 is the first graphics card to offer HBM3e memory, providing 1.4 times more memory bandwidth than the H100 and nearly double the memory capacity.

It also has a memory bandwidth of 4.8 terabytes per second (TB/s), moving enormous amounts of data between the GPU’s memory and its processing cores.

This memory bandwidth significantly increases computing capacity and performance, allowing researchers in scientific computing to work more efficiently.

It is based on NVIDIA’s Hopper architecture, which enhances GPU performance and efficiency for AI and high-performance computing workloads; the architecture is named after computer scientist Grace Hopper.

“Inference” is the step in which a trained AI model interprets new information and makes decisions. To do this quickly, AI can use multiple computers in the “cloud” (computers connected over the internet).

The H200 delivers up to twice the inference speed of the H100 graphics card on large language models like Meta AI’s Llama 3.

The higher memory bandwidth ensures faster data transfer, reducing bottlenecks in complex processing tasks.
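
A rough, illustrative estimate shows why that matters for inference (the model size and weight precision below are my assumptions, not NVIDIA datasheet figures): generating each token of a large language model means streaming essentially all of the model’s weights through the GPU, so memory bandwidth caps the token rate.

# Rough roofline sketch (illustrative assumptions, not NVIDIA figures):
# each generated token streams roughly all model weights from memory,
# so memory bandwidth sets an upper bound on tokens per second.

bandwidth_bytes_s = 4.8e12    # H200 memory bandwidth from the datasheet
params = 70e9                 # assumed model size: 70 billion parameters
bytes_per_param = 2           # assumed 16-bit (2-byte) weights

model_bytes = params * bytes_per_param
tokens_per_second = bandwidth_bytes_s / model_bytes

print(f"Upper bound: about {tokens_per_second:.0f} tokens per second")
# about 34 tokens/s for one request; batching raises total throughput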

It is designed to meet the increasingly large and complex data processing needs of modern AI technology.

As these tasks become more complex, GPUs need to become more powerful and efficient. Researchers are exploring several technologies to achieve this.

Next-generation memory systems, like 3D-stacked memory, which layers memory cells vertically, will further boost data transfer speeds.

High-Performance Computing (HPC) leverages powerful computers to solve complex challenges in fields like scientific research, weather forecasting, and cryptography.

Generative AI is a technology for creating writing, pictures, or music. It also powers large language models, which can understand and produce text that reads as though a person wrote it.

Powerful GPUs generate significant heat and require advanced cooling.

AI optimizes GPU performance by adjusting settings, improving efficiency, and extending their lifespan. Many programs use AI to fine-tune graphics cards for optimal performance and energy savings.

Quantum processing uses the principles of quantum mechanics to solve complex problems that are too difficult to address using traditional computing methods.

Neuromorphic computing, exemplified by spiking neural networks, seeks to replicate the efficiency and learning architecture of the human brain.

As GPUs push the limits of classical computing, quantum computing is emerging with QPUs (quantum processing units) at its core.

QPUs use quantum mechanics to solve problems beyond the reach of even the most powerful GPUs, with the potential for breakthroughs in AI and scientific research.

Google Quantum AI Lab has developed two quantum processing units: Bristlecone, with 72 qubits, and Sycamore, with 53 qubits.

While using different technologies, QPUs and GPUs may someday collaborate in future hybrid computing systems, leveraging their strengths to drive a paradigm shift in computing.

Google’s Tensor Processing Units (TPUs) specialize in deep learning tasks, such as matrix multiplications.

Other key processors fueling AI computing include Neural Processing Units (NPUs), which accelerate neural network training and execution, and Field-Programmable Gate Arrays (FPGAs), which excel at parallel processing and are essential for customizing AI workloads.

Additional components utilized for AI processing are Application-Specific Integrated Circuits (ASICs) for tailored applications, System-on-Chip designs integrating multiple components, graphics cards leveraging GPU architecture, and Digital Signal Processors (DSPs).

These components are advancing machine learning, deep learning (a technique using layered algorithms to learn from massive amounts of data), natural language processing, and computer vision.

GPUs, TPUs, NPUs, and FPGAs are the “engines” fueling the computational and processing power of supercomputing and artificial intelligence.

They will likely be integrated with and work alongside quantum processors in future hybrid AI systems.

I still find the advancements in software and hardware technology that have unfolded throughout my lifetime incredible.
I used Meta AI’s large language model program (powered by Llama 3) and its text-to-image generator (AI Imagined) to create the attached image of two people standing in front of a futuristic “AI Quantum Supercomputer.” The image was created using my text input, and the AI created the image with no human modifications.






Friday, November 15, 2024

Accelerating the future: supercomputing, AI, part one

© Mark Ollig


Intel Corp. (INTC), founded in 1968, had been the semiconductor sector’s major representative on the Dow Jones Industrial Average (DJIA) since 1999; that is, until this month.

The DJIA, established May 26, 1896, with 12 companies, is today an index tracking the stock performance of 30 major companies traded on the US stock exchange.

In 1969, Intel partnered with Busicom, a Japanese calculator manufacturer founded in 1948, to develop a custom integrated circuit (IC) for its calculators.

This contract led to the creation of the Intel 4004, the first commercial microprocessor, used in 1971 in Busicom’s 141-PF calculator; the 4004 combined the central processing unit (CPU), memory, and input and output controls on a single chip.

By integrating transistors, resistors, and capacitors, ICs like the Intel 4004 revolutionized the electronics industry, enabling the design of smaller, more powerful devices. Intel designs and manufactures ICs, including microprocessors, which form the core of a CPU.

Today’s Intel CPUs are supplied to computer manufacturers, who integrate them into laptops, desktops, and data centers for cloud computing and storage.

The Dow Jones recently replaced Intel with NVIDIA (NVDA), a technology company founded in 1993.

“NVIDIA” originates from the Latin word “invidia,” meaning “envy.” The company’s founders reportedly chose this name with the aim of creating products that would be the envy of the tech industry (interestingly, its green logo hints at this, symbolizing competitors left “green with envy”).

NVIDIA has outpaced Intel in artificial intelligence (AI) hardware; however, Intel has strengthened its AI capabilities with its Core Ultra processor series and Gaudi 3.

Gaudi 3 is a data-center-focused AI accelerator designed for specialized workloads, such as processing AI data in large-scale computing centers and training large language models.

The DJIA has clearly recognized this shift, highlighting NVIDIA’s growing influence in AI via its graphics processing units (GPU) technology.

The GPU functions as the computing muscle executing tasks, specifically complex calculations and graphic processing necessary for graphics-heavy applications.

Mostly known for designing and manufacturing high-performance GPUs, NVIDIA released its first graphics chip, the NV1, in 1995, followed by the RIVA 128 in 1997 and the GeForce 256 in 1999.

GPUs are processors engineered for parallel computation, excelling in tasks that require simultaneous processing, such as rendering complex graphics in video games and editing high-resolution videos.

Their architecture also makes them especially suited for AI and machine learning applications, where their ability to rapidly process large datasets substantially reduces the time required for AI model training.
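
As a small illustration of that parallelism (a sketch in Python with NumPy, not NVIDIA code), consider matrix multiplication, the core operation of AI model training: every element of the output is an independent dot product, so thousands of them can be computed simultaneously.

import numpy as np

# In C = A @ B, each output element C[i, j] is an independent dot
# product of row i of A with column j of B. Because no element depends
# on another, a GPU can compute many thousands of them at once.

A = np.random.rand(512, 512)
B = np.random.rand(512, 512)

C = A @ B   # one call, but 512 x 512 = 262,144 independent dot products

# Any single element could be computed on its own, for example:
i, j = 3, 7
assert np.isclose(C[i, j], np.dot(A[i, :], B[:, j]))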

Originally focused on graphics rendering, GPUs have evolved to become essential for a wide range of applications, including creative production, edge computing, data analysis, and scientific computing.

NVIDIA’s RTX 4070 Super GPU, priced between $500 to $600, targets mainstream gamers and content creators seeking 4K resolution.

For more complex workloads, the RTX 4070 Ti Super GPU, priced at around $800, offers higher performance for computing 3D modeling, simulation, and analysis for engineers and other professionals.

Financial experts use GPU technology for analyzing data used in monetary predictions and risk assessments to make better decisions.

Elon Musk launched his new artificial intelligence company, “xAI,” March 9, 2023, with the stated mission to “understand the universe through advanced AI technology.”

The company’s website announced the release of Grok-2, the latest version of their AI language model, featuring “state-of-the-art reasoning capabilities.”

Grok is reportedly named after a Martian word in Robert A. Heinlein’s science fiction novel “Stranger in a Strange Land,” implying a deep, intuitive understanding; at least he did not name it HAL 9000 (“2001: A Space Odyssey”) or the M-5 multitronic computing system (“Star Trek”).

Colossus, xAI’s supercomputer for training Grok, employs NVIDIA’s Spectrum-X Ethernet networking platform, providing 32 petabits per second (Pbps) of bandwidth to handle the data flows necessary for training AI large language models.

It also uses 100,000 NVIDIA H100 GPUs (the H honors pioneering computer scientist Grace Hopper), with plans to expand to 200,000 GPUs, including the new NVIDIA H200 models.

The NVIDIA H100 GPU has 80 GB of HBM3 high-bandwidth memory and 3.35 terabytes per second (TB/s) of bandwidth.

The NVIDIA H200 GPU, released in the second quarter of 2024, has 141 GB of HBM3e (extended) memory and 4.8 TB/s bandwidth.

It provides up to 45% better performance than the H100 in some AI tasks, delivers up to two times faster inference speeds (a measure of how quickly an AI processes new information and generates results) on large language models, and uses less energy.

Both the H100 and H200 GPUs are based on the Hopper architecture.

Be sure to read next week’s always-exciting Bits and Bytes for part two.


NVIDIA HGX H200 141GB 700W 8-GPU Board

Elon Musk’s ‘Colossus’ AI training system with 100,000 Nvidia chips (GPU module in the foreground)


Friday, November 8, 2024

The web’s early threads

© Mark Ollig

The internet, our foundational digital network, is similar to the paved highways connecting various destinations.

The web is an application that runs on top of the internet, much like the cars that travel over our highways.

The web operates on top of the internet’s underlying infrastructure, using its own overlay of software and protocols to enable website interactions.
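
A small Python sketch makes the layering visible (example.com is simply a stand-in host for illustration): the internet supplies the raw connection, a TCP socket here, and the web’s HTTP protocol is structured text riding on top of it.

import socket

# The internet layer: a raw TCP connection to a server (the "highway").
with socket.create_connection(("example.com", 80)) as s:
    # The web layer: an HTTP request, plain text riding on top (the "car").
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

print(reply.decode(errors="replace")[:200])   # HTTP status line and headers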

On March 12, 1989, British computer scientist Tim Berners-Lee presented “Information Management: A Proposal,” a recommendation for a distributed hypertext system, to his colleagues at the European Organization for Nuclear Research, or CERN, in Geneva, Switzerland.

Tim Berners-Lee did not conceive the idea of hypertext; earlier ideas and developments influenced his work.

In 1945, Vannevar Bush, an American engineer who led the US Office of Scientific Research and Development during World War II, wrote, “Consider a future device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.”

“As We May Think,” an essay Bush wrote and published in July 1945 in The Atlantic Monthly magazine, describes a personalized information system, an electromechanical device called “Memex” that stores and links information; at that time, the technology was not yet available to build it.

Although never constructed, Memex, a theoretical concept, influenced the development of hypertext and personal computing.

In 1960, Ted Nelson began Project Xanadu to develop a user-friendly way to cross-reference documents electronically with computers; however, due to technical and coordination issues, it was never completed.

In 1965, Nelson coined the terms “hypertext” and “hypermedia” for methods of instantly interconnecting documents via active links. Hypertext links words, images, and concepts to related information. For example, while reading a website about coffee, you can click a link to see a map of where beans come from or learn how espresso is made.
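
Conceptually, hypertext is just documents carrying named links to other documents. The toy model below (my own illustration, in Python) captures the idea using the coffee example.

# Toy model of hypertext (illustrative only): each page stores text plus
# named links to other pages, so reading becomes navigation.

pages = {
    "coffee": {"text": "Coffee is brewed from roasted beans.",
               "links": {"bean origins": "origins", "espresso": "espresso"}},
    "origins": {"text": "Coffee beans are grown in Brazil, Ethiopia, and beyond.",
                "links": {}},
    "espresso": {"text": "Espresso forces hot water through fine grounds.",
                 "links": {"back to coffee": "coffee"}},
}

def follow(page: str, link_label: str) -> str:
    """Jump from one page to another by 'clicking' a named link."""
    target = pages[page]["links"][link_label]
    return pages[target]["text"]

print(follow("coffee", "espresso"))   # Espresso forces hot water through ...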

During the 1960s, Douglas Engelbart developed the oN-Line System (NLS), which used hypertext-like linking computer coding and introduced graphical computing elements that influenced the development of graphical user interfaces by Xerox and Apple.

“When I first began tinkering with a software program that eventually gave rise to the idea of the World Wide Web, I named it Enquire, short for ‘Enquire Within Upon Everything,’ a musty old book of Victorian advice I noticed as a child in my parents’ house outside London,” Tim Berners-Lee wrote on the first page of his 1999 book “Weaving the Web,” a 226-page volume I own.

Berners-Lee said the title, “Enquire Within Upon Everything,” was suggestive of magic, and that the book was a portal to a world of information.

“In 1980, I wrote a program, Enquire, for tracking software in the PS [Proton Synchrotron] control system. It stored snippets of information and linked related pieces. To find information, one progressed via links from one sheet to another,” Berners-Lee said.

During his March 12, 1989, presentation, Berners-Lee diagrammed a flowchart showing CERN network users distributing, accessing, and collaborating on electronic files. Electronic documents would be viewed and modified regardless of the computer model or operating system.

He proposed a generic, user-friendly client interface: “browser” software that would let a computer user interact with hypertext data servers.

Berners-Lee compared hypertext to a phone directory, with links connecting information about people, places, and categories using a system where users could access a single database via a distributed file system.

He developed the protocols for linking databases and storing documents across a network.

Tim Berners-Lee and his colleague Robert Cailliau, a Belgian informatics engineer who in 1987 proposed a hypertext system for CERN, collaborated to introduce the phrase “WorldWideWeb” Nov. 12, 1990.

A basic form of the WorldWideWeb software was operating at CERN by Dec. 25, 1990.

Berners-Lee wrote the initial code for Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), and the first web browser, which he called “WorldWideWeb,” as a tool for sharing information at CERN. He used a NeXTcube computer workstation.

From 1989 to 1991, he designed the World Wide Web, creating the HTTP, URL, and HTML technologies that allow linked documents to be shared over the internet.

Public computer users outside CERN could use Berners-Lee’s WorldWideWeb hypertext software over the internet starting Aug. 6, 1991.

In 1993, the internet primarily connected universities, private companies, and government computers.

In late 1992, the Mosaic web browser was being developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign. Marc Andreessen and Eric Bina are often credited as the lead developers and driving forces behind it.

NCSA Mosaic version 1.0 became freely downloadable from the NCSA website April 22, 1993.

I remember using NCSA Mosaic on my HP OmniBook 300 laptop computer. Mosaic featured icons, bookmarks, pictures, and a friendly user interface integrating multimedia with text and graphics.

The web, an overlaying program on the internet, operates in harmony with the internet protocols created by Vint Cerf and Robert Kahn.

Tim Berners-Lee’s first hypertext website can be seen at https://bit.ly/48AtVXt.

Today, we are witnessing the World Wide Web evolving. Its threads between information, social media, business, government, and e-commerce sites are rapidly merging with artificial intelligence, so hang on, folks!

A snapshot of the 1993 NCSA Mosaic web browser in use.
(Photo Public Domain, by Charles Severance)


Friday, November 1, 2024

Remember to cast your ‘ballotta’

© Mark Ollig


The word “election” originated in the 13th century from the Anglo-French language, meaning “choice” or “selection.”

Ballotta is the Italian word for the “little ball” used in voting, from which the English word “ballot” is derived.

In the late 1500s, the people of Venice, Italy, would register their vote (in secret) by dropping a specifically marked or colored ball (ballot) into a container. The different colors or markings on the balls represented a particular vote or candidate. The balls were then counted to determine the winning choice.

Around 508 BC, Athens created a system to protect their democracy from tyrants and prevent anyone from gaining excessive power.

Citizens could write the name of someone they believed threatened the stability and well-being of the entire community on a piece of pottery called an ostrakon. If enough votes were cast, the person faced exile. This practice, known as ostracism, is where the word “ostracize” originates.

In 1856, William Boothby, then sheriff of Adelaide in southern Australia, developed the Australian ballot method. This method used secret paper ballots on government-issued forms printed with candidates’ names. Voters cast their ballots in an enclosed booth and placed them in a secure ballot box for hand-counting.

Boothby’s Australian secret ballot method spread to Europe and the United States, where voters in Massachusetts first used it during a US presidential election Nov. 6, 1888.

Early voting practices in our country involved publicly declaring one’s chosen candidate aloud or writing the candidate’s name on paper, often in front of others. This practice is known as non-secret voting.

Citizens using this “non-secret” ballot method were sometimes intimidated, coerced, or bribed monetarily to cast their vote for a particular candidate.

In 1888, Massachusetts and New York adopted the Australian ballot system statewide, which included government-printed ballots listing all the candidates, rather than having voters write in names or use ballots provided by political parties.

Privacy for ballot marking was ensured using compartmental booths or tables with partitions.

The Minnesota Election Law of 1891 required the use of the Australian ballot system for all general elections.

The January 1891 general statutes of the State of Minnesota says, “The election law of 1891, bringing the entire state under the so-called Australian system of voting in general elections, imposes important duties upon this office, also upon each and every town board and town clerk, all of which must be performed in proper order to secure a valid election.

“Under section 44 of said law, each election district must be provided with three ballot boxes for voting, one ballot box painted white, one painted blue, and one painted black. There shall also be provided in each election precinct, two voting booths for every hundred electors registered. There shall also be provided an indelible pencil for each voting booth.”

Our use of the term “voting booth” likely originated from the name William Boothby, although this is not definitively proven. Back in the 19th century, a booth was also considered an enclosed space for animals inside a barn.

Minnesota used the Australian system of voting for the 1892 US presidential election.

A Minneapolis Times newspaper article from Nov. 15, 1892, titled, “Comment on the Australian Ballot System of counting” stated, “The Australian ballot law has its limitations, and those who’ve worked closely with it, like election judges, generally agree that while it’s effective in preventing illegal voting and ensuring ballots are cast secretly, it falls short when it comes to counting those ballots.”

Jacob Hiram Myers (1841 to 1920) obtained US Patent 415,549, titled “Voting Machine,” Nov. 19, 1889, for the first mechanical lever voting machine.

In 1890, he founded Myers American Ballot Machine Company, and his voting machines were first used in Lockport, NY, in 1892 for a town election.

Unfortunately, Myers’ voting machines encountered significant problems during the Rochester, NY election of 1896, after which his company closed.

By the 1930s, improved models of mechanical lever voting machines were being used in many US cities; however, they suffered various problems, including tampering, and by 1982, most US production of these machines had ended.

An optical mark-sense scanning system for reading ballots was first used in 1962 in Kern City, CA.

The Norden Division of United Aircraft and the City of Los Angeles designed and built this ballot reading method, which was also used in Oregon, Ohio, and North Carolina.

In the 1964 presidential election, voter jurisdictions in two states, California and Georgia, used punch cards and computer tabulation machines.

The 2000 presidential election is remembered for Florida’s punch card ballots and their “hanging chads” recount.

US Patent 3,793,505 was granted for “the Video Voter” Feb. 19, 1974. The abstract described it as “An electronic voting machine including a video screen containing the projected names of candidates or propositions being voted.”

The Video Voter was used in Illinois in 1975 and is considered the first direct-recording electronic voting machine used in an election.

Today, Minnesota ballot tabulators use optical scanner equipment to read and record the ballot vote for each candidate. Companies providing the state’s voting equipment include Dominion Voting Systems, Election Systems & Software (ES&S), and Hart InterCivic (Hart Verity).

Be sure to exercise your right to vote.


Image depicting a New York polling place from 1900, showing voting booths on the left. The image is public domain, from the 1912 History of the United States, volume V, Charles Scribner’s Sons, New York.


Friday, October 25, 2024

My ‘additive manufacturing’ journey

© Mark Ollig


3D printing, also known as additive manufacturing, creates physical objects from digital files. 

These files can be designed with Computer-Aided Design (CAD) software or found online.

Materials like plastics and metals are used to make physical objects/models, built layer by layer with a 3D printer.

Recently, my two youngest sons gave me a Bambu Lab A1 mini 3D printer as a birthday gift. 

The second oldest son is a 3D printing enthusiast and has printed some models for me, like the NASA Viking 1 lander and the James Webb Space Telescope. 

During the COVID-19 pandemic, he was printing sturdy casings to hold the filter used with N-95 masks. 

He also printed an incredibly realistic miniature of the moon’s surface using artificial moon dust called regolith. This mini moonscape now serves as the landing spot for my model of the Apollo 11 lunar module.

The Bambu printer came out of the box nearly fully assembled. It fits nicely on the wooden stand that once held my Xerox laser printer, which I had given to my oldest son after purchasing a new HP model.

The Bambu Handy software application allows me to control the 3D printer directly from my smartphone or laptop. 

Today’s computing landscape is all about apps and cloud-based programs, a stark contrast to the floppy disk days of yesteryear.

According to Bambu Lab’s website, their A1 mini 3D printer weighs 12.2 pounds and measures 13.7 inches high, 12.4 inches wide, and 14.4 inches deep. 

The build volume, or maximum size of the object model it can print, is 7.1 by 7.1 by 7.1 inches.

I turned on the 3D printer, installed its app, connected the printer to my internet router’s Wi-Fi, and registered with Bambu Lab.

Bambu Lab’s 3D printers use custom, non-open-source computing firmware, reportedly a Linux-based operating system. Two popular open-source firmware options for 3D printers are Marlin, created in 2011, and Klipper, developed in 2016.

I loaded the Polymaker spool of 1.75 mm (0.069-inch) polylactic acid (PLA) filament onto the 3D printer’s spool holder. PLA is a type of biodegradable plastic. The spool, on which 1,082 ft of filament is rolled, weighs 2.2 lb and is made of recycled cardboard. 
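
As a quick sanity check of those spool figures (a sketch assuming a typical PLA density of about 1.24 g/cm³, my assumption rather than a Polymaker specification), the filament’s length and diameter predict its weight.

import math

# Sanity check: does 1,082 ft of 1.75 mm PLA filament weigh about 2.2 lb?
# The PLA density of ~1.24 g/cm^3 is a typical value I am assuming here.

diameter_cm = 0.175                 # 1.75 mm filament
length_cm = 1_082 * 30.48           # feet to centimeters
density_g_cm3 = 1.24                # assumed typical PLA density

volume_cm3 = math.pi * (diameter_cm / 2) ** 2 * length_cm
weight_lb = volume_cm3 * density_g_cm3 / 453.6

print(f"Predicted filament weight: {weight_lb:.2f} lb")   # about 2.2 lb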

Next, I threaded the Savannah Yellow-colored filament into the polytetrafluoroethylene (PTFE) guide tube, which led to the printer’s hardened steel extruder.

The Bambu Lab A1 has four stepper motors, one of which powers the extruder, which draws the filament into the nozzle within the tool head, where it is heated from 374 to 446 °F. The printer is capable of reaching temperatures up to 572 °F.

Calibration of the 3D printer involves leveling its build plate and adjusting nozzle height, filament flow, temperature, and belt tension to ensure accurate and reliable layer printing at speeds up to 19.7 inches per second. 

The dynamic flow control program ensures the 3D printer dispenses the correct amount of plastic filament.
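
That flow is easy to estimate (an illustrative calculation using typical slicer settings of my own choosing, not Bambu Lab’s published figures): the volume of plastic leaving the nozzle each second is simply layer height times line width times print speed.

# Illustrative flow estimate with assumed, typical slicer settings:
# plastic extruded per second = layer height x line width x print speed.

layer_height_mm = 0.2    # assumed layer height
line_width_mm = 0.4      # assumed extrusion line width
speed_mm_s = 250         # assumed print speed for this line

flow_mm3_s = layer_height_mm * line_width_mm * speed_mm_s
print(f"Required flow: {flow_mm3_s:.0f} mm^3/s of melted PLA")   # 20 mm^3/s

# The dynamic flow control program must keep the extruder feeding
# filament at exactly this melted-plastic rate as print speed changes.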

I used the app to connect to Bambu Lab’s cloud servers, where I chose a digital model from their library. 

To evaluate the printer’s performance, I printed a 3D Benchy tugboat.

This highly detailed tugboat is a standard test for 3D printers. It helps to see how well the printer can replicate complex features like curves, small details, and inclined planes.

I trimmed the filament tip and threaded it through the tube until it reached the extruder, which feeds and controls the flow of melted plastic to build each layer of a 3D print. 

I then tapped the “Load” icon on the color touchscreen at the front of the 3D printer. 

The extruder smoothly pulled the filament through the PTFE tube and into the hotend of the tool head, where it would be melted for printing my model.

I then saw part of the yellow filament emerging from the nozzle, which meant the printer was ready.

The 3D printer began extruding the heated, melted plastic filament, following the digital file instructions to build the tugboat layer by layer on the build plate.

The app provides a live video feed of the tugboat’s construction from the camera attached to the 3D printer. 

The 3D printer performed flawlessly, producing a robust yellow tugboat model with smooth lines and distinct features like a smokestack and windows. 

I was also impressed by how quietly the printer ran from start to finish. 

As it is the Halloween season, I also printed a robotic-looking skeleton.

My son proposed a fitting analogy for 3D printing: Building a brick wall involves stacking layers of bricks, while a 3D printer builds objects in layers of plastic. 

I like this printer and consider it an incredible tool for exploring the possibilities of 3D printing on a personal scale.

Forty years ago, while working for the Winsted Telephone Co., I clearly remember unrolling copper-paired cable from a heavy wooden spool mounted on a trailer hitched to the company’s yellow 1965 Ford F-100 service/utility truck. 

These days, I am threading thin plastic filament from a lightweight recycled cardboard spool attached to a 3D printer.

Perhaps tackling a 3D-printed model of that old ’65 Ford telephone truck will be my next project.

Thank you for the great birthday present, boys.
Finished tugboat and robotic skeleton, 3D printed and placed on the build plate of the Bambu Lab A1 model printer. (Photo by Mark Ollig)

Bambu Lab A1 mini 3D printer building the tugboat. (Photo by Mark Ollig)