
Friday, August 30, 2024

The visionaries shaping our digital world

© Mark Ollig


I came across an interesting YouTube video the other day.

It featured two people enjoying breakfast at an outdoor cafe while reading the news using a sleek, flat-screened computing tablet.

One of them held a stylus pen to interact with the display screen, tapping on it to pull up news stories and ads.

What’s unusual about this? Well, the video was recorded 30 years ago, in 1994 – 16 years before the first Apple iPad touchscreen tablet was released.

In 1994, Roger Fidler, a journalist and newspaper designer, produced a video showing how people would interact with a digital “electronic newspaper” in the future using a layout mimicking a print newspaper.

The video suggests transitioning to a digital format would allow readers to “clip and save articles or send them electronically to a friend,” and allow advertisers to reach a larger audience.

Fidler’s technological vision aligns with how we consume news and information today using various computing tablets, iPads, and smartphones.

In August 1972, computer scientist Alan C. Kay, while working at Xerox PARC (Palo Alto Research Center) in Palo Alto, CA, authored a paper describing an educational handheld tablet computer, the DynaBook, which encapsulated the idea of a “dynamic” and interactive digital “book.”

Kay included descriptions and diagrams of the DynaBook in the document “A Personal Computer for Children of All Ages.”

In the paper, Kay acknowledges that his ideas about personal computing for education had been forming before his time at Xerox PARC, which he joined in 1970.

Kay’s DynaBook was to be a portable “carry anywhere” interactive tool for student learning.

One paragraph describing how the DynaBook would process information mentioned LSI (Large Scale Integration) chips; “Intel 4004” had been handwritten next to it, which led me to believe Kay wanted to use cutting-edge technology.

In 1971, the Intel 4004, a four-bit processor chip, was one of the first commercially available microprocessors.

He described the DynaBook personal computer as having a flat display screen covering its surface, with flexible input options, including both physical and virtual keyboards, and a touch screen interface.

Kay depicted the DynaBook’s keyboard as thin, with no moving parts, using pressure-sensitive screen sensors for both input and output.

The DynaBook would have wireless communication capabilities and could network with other DynaBooks, obtaining information wirelessly from “centralized information storage units,” which seem to foreshadow today’s cloud computing servers.

Kay’s 1972 paper didn’t specifically define the exact ways of interacting with the DynaBook, like using one’s index finger for touch-based input, a stylus pen, or voice recognition; however, the diagrams and sentences in the paper suggest these methods.

The DynaBook diagram features “Files,” implying basic file management. The document suggests 8K memory as the minimum requirement, with 16K enabling advanced features.

Diagrams of the DynaBook show a rectangular device measuring 12 inches by 9 inches, 0.75 inches thick, and weighing less than four pounds.

The 1972 paper states, “A combination of this ‘carry anywhere’ device and access to a global information utility such as the ARPA [Internet] network or two-way cable TV, will bring the libraries and schools of the world to the home.”

The 52-year-old document explores the possibility of the DynaBook using phase-transition liquid crystals (an early liquid crystal display technology) for its screen because of their low power requirements, image quality, and suitability for viewing in different lighting conditions.

The 1972 paper describes how the development of rechargeable battery technology would produce a DynaBook capable of operating for an extended duration.

Kay’s work was said to have influenced tech companies, including Apple and Microsoft.

From 1966 to 1969, the TV series Star Trek showed crew members obtaining information from rectangular electronic clipboards with flat display screens they would operate using a stylus pen.

The clipboards resembled modern computing tablets, a surprisingly accurate depiction of “the future” from the late 1960s.

From 1987 to 1994, crew members on the TV series “Star Trek: The Next Generation” used small, rectangular handheld computing devices with touchscreens.

Star Trek lore refers to them as PADDs (Personal Access Display Devices), with some having a sleek touchscreen and others containing illuminated square buttons.

PADDs reportedly inspired the development of real-world computing tablets, including the first Apple iPad, which was sold to the public April 3, 2010.

With features likely to impress Captains James T. Kirk and Jean-Luc Picard, this year’s Apple iPad Pro is a sleek, rectangular device measuring 11.09 inches by 8.48 inches and about one-fifth of an inch thick. It has a 13-inch diagonal touch display and works with a stylus called the Apple Pencil.

Alan Curtis Kay, born in Springfield, MA, is 84 years of age. His 1972 paper, “A Personal Computer for Children of All Ages,” can be read at https://bit.ly/46ZDiit.

Roger Fidler, born in Mount Vernon, WA, is 81. His 1994 13-minute video, “The Tablet Newspaper: a Vision for the Future,” can be seen at https://bit.ly/3SWi3II.
 
These are two of the visionaries who helped to shape our digital world.

Alan Kay and the prototype of the Dynabook, Nov. 8, 2008 (Wikimedia Commons)





Friday, August 23, 2024

From bits to petabits per second

© Mark Ollig


In 1958, the Bell 101 modem enabled vital data transmissions within the Semi-Automatic Ground Environment (SAGE), a large-scale computer system developed in the U.S. during the Cold War to coordinate and automate military air defense.

The modem weighed 25 lbs. and operated at 110 baud (110 bps) due to its modulation scheme, Frequency-Shift Keying, in which each signal change (baud) represents one bit of data.
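
For readers who like the arithmetic, the underlying relationship is simple (the symbols below are my own shorthand, not taken from any modem standard): a modem’s bit rate equals its symbol rate in baud multiplied by the number of bits carried per signal change,

$$ R_b = R_s \times \log_2 M , $$

where $M$ is the number of distinct signal states. FSK’s two states give 110 baud x 1 bit = 110 bps, while later modems packed several bits into each symbol; that is roughly how a 2,400-baud modem of the V.32 era reached 9,600 bps by carrying about four bits per signal change.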

In the 1980s, many of us used 1200 and 2400 bps modems to connect our personal computers to remote computing networks and bulletin board systems.

These modems connected to our computer’s RS-232 serial port, with an RJ-11 modular cord plugging into the telephone line.

During the 1990s, various modem standards, including V.32 (9.6 Kbps), V.32bis (14.4 Kbps), V.34 (28.8 Kbps), V.34bis (33.6 Kbps), and V.90 (56 Kbps), established the connection between our modem and the ISP (Internet Service Provider) modem, enabling us to connect to the internet and explore the early web.

Remember the sounds of the high-pitched beeps, chirps, and whistles during the data “handshake” negotiating process with the ISP?

In 2000, the Federal Communications Commission (FCC) set a new broadband standard of at least 200 Kbps for download or upload speeds to adapt to the increasing demand for high-speed internet.

The 2000s also saw DOCSIS (Data Over Cable Service Interface Specification) 1.0/1.1 offering cable broadband speeds ranging from 5-15 Mbps (megabits per second) downstream and 1-5 Mbps upstream over existing coaxial cable TV infrastructure.

In addition to DOCSIS, ADSL (Asymmetric Digital Subscriber Line) technology allowed for simultaneous voice and internet use over existing telephone copper lines at speeds of 1-3 Mbps.

The maximum distance limit for ADSL was around 3.4 miles from the serving telephone office.

In 2001, the U.S. Census Bureau reported that in 2000, 4.4% of U.S. households had home broadband connections, while 41.5% relied on dial-up connections with speeds of 28.8 or 56 Kbps.

In 2003, cellular 3G (Third Generation) data download speeds averaged 1-2 Mbps, sufficient for basic mobile internet use at the time.

After 2003, telephone companies began installing ADSL2+ (Asymmetric Digital Subscriber Line 2 Plus), which bonded multiple copper cable pairs to provide average download speeds of 9 Mbps in urban areas and 6 Mbps in rural areas.

In 2009, the American Recovery and Reinvestment Act allotted $7.2 billion to expand broadband access and promote digital inclusion in underserved communities.

The grant money was used for fiber-optic networks to bridge the rural-urban “digital divide” caused by slower internet speeds in sparsely populated areas.

In 2010, the FCC updated the definition of broadband to require a minimum download speed of 4 Mbps and a minimum upload speed of 1 Mbps.

A 2013 report from the National Telecommunications and Information Administration (NTIA) revealed that the average global internet speed in 2010 was 2 Mbps, while the U.S. average was 4.7 Mbps.

By the early 2010s, cellular 4G (Fourth Generation) LTE (Long-Term Evolution) significantly increased mobile internet speeds, initially averaging around 6.5 Mbps.

On Sept. 15, 2010, the Electric Power Board (EPB) in Chattanooga, Tennessee, offered a groundbreaking 1 gigabit per second (Gbps) internet service to residents and businesses via its fiber-optic network.

At that time, 1 Gbps was considered incredibly fast, far beyond the standard speeds offered by most internet service providers.

In 2015, the FCC updated its definition of broadband to a minimum of 25 Mbps download and 3 Mbps upload speeds.

The COVID-19 pandemic underscored the significance of universally accessible, reliable, high-speed broadband networks for remote work, education, healthcare, and commerce.

Verizon activated its commercial 5G cellular service in Minneapolis on April 11, 2019.

Using my FCC Speed Test app, I recently checked the speed of my mobile broadband connection from my Verizon 5G Ultra-Wideband cellphone while away from home and measured an 89.68 Mbps download speed.

At home, my Verizon 5G Ultra-Wideband internet gateway router reached an average download speed of 247 Mbps.

In 2023, the FCC estimated the average U.S. broadband speed to be 170-180 Mbps.

Minnesota’s Broadband Grant Program allocated $100 million for 2024 and 2025 to expand broadband access to approximately 8,900 unserved and underserved homes and businesses across the state.

What data speeds might we see in the future?

An international research team presented their groundbreaking research findings at the 47th International Conference on Optical Fiber Communications in San Diego March 28.

They achieved a record-shattering 402 Tbps data transmission rate in an experiment using all six wavelength bands and advanced modulation techniques over 32.1 miles of standard fiber-optic cable; the record was officially verified June 28.

To put the sheer scale of this breakthrough into perspective, consider that a 402 Tbps (402,000 Gbps) connection could download 24,120 two-hour 4K movies (3840 x 2160-pixel resolution) in just one minute.
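
The arithmetic behind that movie figure is worth showing; the roughly 125 GB per film it implies is my assumption for a high-bitrate, two-hour 4K title (streamed versions are much smaller):

$$ 402 \times 10^{12}\ \text{bits/s} \times 60\ \text{s} = 2.412 \times 10^{16}\ \text{bits} \approx 3{,}015{,}000\ \text{GB}, $$

and 3,015,000 GB divided by about 125 GB per movie works out to roughly 24,120 movies in one minute.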

Over the past 65 years, data speeds have progressed from bits per second to kilobits, megabits, and gigabits, with terabits per second on the horizon for widespread commercial internet use.

What comes after terabits per second? Well, it would be a petabit per second (Pbps) data transmission rate.

To put it in perspective, 1 Pbps is equivalent to 1,000 terabits per second, or 1,000,000 gigabits per second (Gbps).
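
Carrying the earlier movie example one step further, and keeping my assumed 125 GB per film, a 1 Pbps link would be about two and a half times faster than the 402 Tbps experiment:

$$ \frac{1{,}000\ \text{Tbps}}{402\ \text{Tbps}} \approx 2.5, $$

so the same 24,120-movie batch would download in roughly 24 seconds instead of a minute.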

Stay tuned.

Friday, August 16, 2024

The newsroom’s unsung hero

© Mark Ollig


An article in New York’s Albany Evening Journal April 6, 1852, suggested using the word “telegram” as a term for “telegraphic dispatch” or “telegraphic communication.”

The article also suggested “teletype,” for which the abbreviation TTY is used.

Terms like “printing telegraph,” “teleprinter,” and “teletypewriter” would later appear.

While telegraphic printing machines existed in 1852, it took several years of improvement, including advancements in telegraphy, before their widespread use.

In 1867, Edward A. Calahan invented a stock ticker system that used a metal box-shaped device with a rotating typewheel to print stock prices onto paper tape.

The system transmitted coded electrical signals over telegraph lines and was powered by either batteries or manual magneto hand cranks.

In 1910, the Morkrum Co. installed the first commercial teletype system, transmitting financial information and news between Boston and New York via the Postal Telegraph Co.’s wire lines.

The system, called the Morkrum Printing Telegraph, featured a simplified keyboard and printed text on paper rolls.

The Associated Press (AP), founded in 1846, became an early adopter of teleprinter machines in 1914, and United Press (UP), founded in 1907, began using them in 1915.

AP and UP, both wire services, transmitted coded news updates to newspapers and radio stations using telegraph and dedicated telephone lines.

In 1921, the Morkrum Co. introduced the Model 11 type-wheel tape printer, the first commercially successful teletype machine.

In 1925, the Morkrum-Kleinschmidt Co. launched the Model 14 teletype, transforming newsrooms by automating the printing process, which significantly reduced text errors.

The clattering sound of teletypewriters filled newsrooms across the country during the 1920s and 1930s as trained operators received and processed incoming news dispatches from wire services.

During this time, teletype machines were used for communication by the FBI, commercial airlines, stock brokerage firms, and wire services.

During the 1929 stock market crash, the AP and UP transmitted urgent bulletins, market overviews, and human-interest stories to newsroom teletype machines.

In 1930, the Morkrum-Kleinschmidt Co., by then renamed the Teletype Corp., was purchased by AT&T’s Bell System and became a wholly owned subsidiary of the Western Electric Co.

Teletypes marked a turning point in the radio news industry by establishing a direct link to wire services like AP and UP, allowing journalists to access, edit, and broadcast news over the air quickly.

During the 1940s, wireless radioteletype (RTTY) machines enabled long-distance news transmission, especially in areas without wired connections.

The Teletype Model 19 RTTY, manufactured by Teletype Corp., stood more than 3 feet tall and weighed approximately 235 pounds.

The machine featured a QWERTY keyboard, teletype keys, a printer for paper roll output, and the capability to manage paper tape for message storage and transmission at approximately 45.5 bits per second.

RTTYs were used for military and international communications.

KSTP-TV, the first commercial television station in Minnesota, began broadcasting April 27, 1948.

In 1949, the US had about 98 commercial TV stations and an estimated 2.3 million television sets in use.

In April 1950, newspaper ads listed TVs like the Motorola 12T3 and 19K2 with large wooden cabinets and 10- to 19-inch black-and-white screens. These sets cost $139.95 to $449.95, equivalent to approximately $1,860 to $5,990 today.

In 1958, United Press (UP) merged with International News Service (INS) to become United Press International (UPI).

In 1963, the Teletype Corporation launched the Model 33. It weighed around 75 pounds, operated at 110 bits per second, and could connect to emerging computer networks and telephone lines with a modem.

Teletype machines used bell rings to signal urgent news bulletins from wire services like AP and UPI, labeled as “flash,” “urgent,” or “bulletin,” to alert journalists to breaking news events.

Teletype machines across the country clattered and bells rang Nov. 22, 1963, delivering the news from Dallas, TX.

CBS television news anchor Walter Cronkite reviewed the printed teletype bulletin messages from the wire services moments before his on-air announcement.

Cronkite interrupted the television soap opera “As the World Turns,” saying, “Here is a bulletin from CBS News. In Dallas, TX, three shots were fired at President Kennedy’s motorcade in downtown Dallas.”

In the CBS newsroom, reporters captured the intensity of the moment as they huddled around teletype machines, anxiously awaiting updates from wire services like AP, UPI, and Reuters.

Journalists today work on silent, internet-connected computers, a striking departure from the bustling newsrooms of yesteryear, filled with the noisy clatter of typewriters and the rhythmic hum of teletype machines – and cigarette smoke wafting in the air.

On a related note, in the 1980s, I worked with a teletypewriter terminal at the local telephone company in Winsted. This terminal was connected to the Nortel Digital Multiplex System (DMS-10) switching platform, which was the backbone of our telephone service at the time.

Today, the teletype’s legacy lives on as the unsung hero from yesterday’s newsrooms.



Friday, August 9, 2024

AI: trust, but verify

© Mark Ollig

Generative AI is transforming work by creating initial content and aiding in exploring and generating new ideas and creative possibilities.

AI models use data and computational methods to generate original content for music, programming, journalism, social media, and applications like marketing and advertising.

Generative AI can also generate and analyze images.

Some people debate the merit and credibility of AI-generated work, while others see it as a tool for inspiration, collaboration, and exploration.

A recent Pew Research Center survey of 10,191 US adults revealed mixed opinions on AI-generated content.

Pew reports that 60% want AI to cite sources for the content it generates, with 61% saying news organizations should be credited when AI uses their content.

AI-generated content is being created on platforms like OpenAI’s ChatGPT, Meta AI, Google’s Gemini, and Microsoft’s Copilot, all of which are trained on massive amounts of human-created data.

I have used Google’s Gemini 1.5 Pro AI platform to extract and summarize information from mixed document formats, including Word, PDFs, spreadsheets, and text files.

Alongside financial, educational, medical, and governmental institutions, the military is harnessing AI in surveillance, autonomous weapons, logistics, and training simulations.

I have witnessed AI being used in telecommunication networks to monitor performance, diagnose issues, perform maintenance, and optimize call routing based on volume. These actions make the networks more efficient in processing calls and preventing outages.

While AI can enhance creativity and innovation by automating tasks, its rise in use also presents challenges, such as its impact on employment.

Research by the McKinsey Global Institute suggests automation could displace between 400 million and 800 million jobs globally by 2030.

Another study highlights that generative AI platforms can produce inaccurate information, so users should verify the sources behind what they generate.

AI-generated content from platforms like Meta AI, ChatGPT, and Gemini Advanced should always be fact-checked and validated with cited sources to ensure accuracy and avoid misinformation.

AI also poses significant risks to credible information, as AI-generated content can blur the lines between real and fake.

Deepfakes, realistic but fabricated media created using AI, can manipulate public opinion and undermine democratic processes; such deceptions erode trust in AI.

Combating AI-generated misinformation requires technical solutions, such as developing tools to detect deepfakes and educational programs to teach people how to evaluate AI-provided information.

Of course, any “fictional” AI-generated content should be labeled as such.

When using AI platforms for serious work, we should be provided with the reasoning behind their conclusions, along with the sources used to draw those conclusions.

The AI industry ought to establish clear public guidelines for the development, deployment, and operation of AI systems, and allow users to report inaccuracies or misrepresentations of their data.

Several organizations, including the Partnership on AI, the Responsible AI Institute, and the AI for Good Foundation, are working together to establish ethical guidelines and standards for AI use.

The Alan Turing Institute and the Institute of Electrical and Electronics Engineers (IEEE), through its Global Initiative on Ethics of Autonomous and Intelligent Systems, are also collaborating on AI guidelines.

More public dialogue is needed to discuss the benefits and risks of AI-generated content, along with its ethical and responsible use.

The training of AI models with copyrighted materials has ignited complex legal debates around ownership, fair use, and authorship.

As AI-generated content continues to evolve rapidly, it raises serious questions about how copyright laws apply.

The US Copyright Office has repeatedly rejected copyright applications for AI-generated works, citing that copyright law requires human authorship involving “creative powers of the mind,” a standard AI has yet to meet.

The creative capabilities of AI models like GPT-3 and DALL-E 2 are rapidly advancing and have produced content that can be difficult to distinguish from human-created works.

There is an ongoing debate about whether AI indeed possesses the “creative powers of the mind,” as defined by human standards.

Some argue AI merely mimics patterns, lacking the intentionality and emotional depth associated with human creativity.

Others believe that AI may soon achieve a level of ingenuity that is indistinguishable from or even surpasses human creativity by generating original content that pushes the boundaries of our imaginative capabilities.

The 2021 unveiling of the advanced human-like robot Ameca in the UK and its display at the Consumer Electronics Show in Las Vegas in 2022 has raised questions about the legal status of AI’s individualism and unique creativity.

Major news wire services are increasingly using AI to automate tasks and are assessing the use of AI-generated short stories.

The Associated Press (AP) has used AI for data-driven journalism since 2014, starting with financial reports and sports summaries and later expanding to election reporting, business news, and other data-driven stories.

The Guardian and the Washington Post have also used AI to create articles and opinion pieces.

Gannett Co. Inc. is exploring AI’s capability for error checking, summarizing content, and data analysis.

Although generative AI promises to revolutionize content creation, its transformative power demands a cautious approach: “Trust, but verify.”










Friday, August 2, 2024

GPS: part two

© Mark Ollig

Today’s Global Positioning System (GPS) has transformed navigation, from guiding us to our various destinations to supporting military operations.

Development of the world’s first satellite navigation system, the US Navy’s Transit, began in 1958; the system became fully operational in 1967.

Transit 1B, launched April 13, 1960, was the first successful satellite to support the Transit system, following the unsuccessful launch of Transit 1A in September 1959.

The US Navy utilized the Transit satellite system to help submarines determine their locations by observing Doppler shifts in the satellites’ radio signals, similar to the way the sound of a siren changes pitch as it approaches and then moves away.

The first satellite to broadcast GPS signals, Navigation Technology Satellite 2 (NTS-2), was launched in 1977 to evaluate GPS functionality.

The first Navigation System with Timing and Ranging (NAVSTAR) GPS satellite, Navstar 1 (also known as GPS-1), was launched in 1978 as part of the Block I series.

In April 1995, the GPS NAVSTAR satellite constellation, consisting of 24 satellites primarily comprised of Block II and IIA models in precisely arranged orbits, became fully operational.

Each of the GPS satellites in the NAVSTAR constellation orbits Earth twice daily at roughly 12,550 miles above the surface.
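
As a rough cross-check of those two figures (the arithmetic is mine, using Kepler’s third law), an orbital radius of 12,550 miles of altitude plus Earth’s roughly 3,959-mile radius, about $2.66 \times 10^{7}$ m, gives an orbital period of

$$ T = 2\pi \sqrt{\frac{a^{3}}{\mu}} = 2\pi \sqrt{\frac{(2.66 \times 10^{7}\ \text{m})^{3}}{3.986 \times 10^{14}\ \text{m}^{3}/\text{s}^{2}}} \approx 43{,}000\ \text{s} \approx 12\ \text{hours}, $$

where $\mu$ is Earth’s gravitational constant, which is why each satellite circles Earth about twice a day.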

A GPS receiver calculates its two-dimensional (2D) position (latitude and longitude) using signals from at least three satellites.

With a signal from a fourth satellite, the receiver can also determine altitude, providing a precise three-dimensional (3D) location fix.
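
For the technically curious, here is a minimal Python sketch of the idea behind a position fix. It is not how real receivers work (there are no atmospheric corrections, orbit models, or radio signal processing), the satellite coordinates, receiver location, and clock error are made up for illustration, and the solver simply applies iterative least squares to pseudoranges that all share one unknown receiver clock bias:

import numpy as np

C = 299_792_458.0  # speed of light in meters per second

def solve_position(sat_positions, pseudoranges, iterations=10):
    """Estimate a receiver's position (meters, Earth-centered coordinates) and
    clock bias (seconds) from four or more satellites and their pseudoranges."""
    # Rough starting guess: a point on Earth's surface beneath the satellites.
    x = np.mean(sat_positions, axis=0)
    x = x / np.linalg.norm(x) * 6.371e6
    b = 0.0  # receiver clock bias, seconds
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_positions - x, axis=1)
        residuals = pseudoranges - (ranges + C * b)  # measured minus modeled
        # Jacobian: unit vectors from each satellite toward the receiver,
        # plus one column for the clock-bias term.
        H = np.hstack([(x - sat_positions) / ranges[:, None],
                       np.full((len(ranges), 1), C)])
        delta, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        x += delta[:3]
        b += delta[3]
    return x, b

# Made-up example: four satellites roughly 12,550 miles up and a receiver on
# Earth's surface whose clock is off by 100 microseconds.
sats = np.array([[15_600e3, 7_540e3, 20_140e3],
                 [18_760e3, 2_750e3, 18_610e3],
                 [17_610e3, 14_630e3, 13_480e3],
                 [19_170e3, 610e3, 18_390e3]])
true_pos = np.array([3_900e3, 2_100e3, 4_600e3])
true_bias = 1e-4
pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

est_pos, est_bias = solve_position(sats, pseudoranges)
print("position error (meters):", np.linalg.norm(est_pos - true_pos))
print("clock bias error (seconds):", abs(est_bias - true_bias))

The least-squares step is repeated until the estimated position stops changing; the fourth satellite is what lets the solver separate the receiver’s clock error from its true distance to each satellite, which is why a full 3D fix needs four signals rather than three.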

The GPS satellites use solar panels to charge batteries that power their electronics; they are also equipped with small rocket boosters to maintain their trajectory.

Civilian GPS primarily uses the L1 (1575.42 MHz) and L5 (1176.45 MHz) frequencies, with L2 (1227.60 MHz) also available for civilian use.

The military also uses these frequencies, along with the encrypted Precise (P) code on the L1 and L2 frequencies, for enhanced accuracy and protection against GPS spoofing.

GPS spoofing poses a threat by manipulating receiver locations with fake signals, potentially hijacking drones or autonomous vehicles.

Before 2000, civilians had limited GPS access due to intentionally degraded accuracy from Selective Availability, which the US government discontinued in 2000.

In 2020, the Federal Aviation Administration reported that high-quality GPS devices can determine a location on the ground within 16 feet of its actual position 95% of the time, with further improvement to 11.5 feet or less using wide area augmentation system technology.

Lockheed Martin, a global aerospace and defense company, is under contract with the US Space Force and is responsible for constructing the GPS III and future GPS IIIF satellites, with the first projected to launch at the end of 2026.

GPS provides precise positioning and navigation for various applications, including ground travel, aviation, maritime industries, and land surveying.

It also provides accurate timing, essential for transaction verification in financial networks and for use in synchronizing call routing in telecommunication networks.

Additionally, GPS precise timing and location data is used for synchronizing electrical grids, supporting emergency services’ navigation and tracking capabilities.

Most GPS satellites rely on multiple highly accurate rubidium atomic clocks for precise timing and accurate navigation. These clocks feature exceptional long-term accuracy, losing only about one second every 300 years.

The GPS Block III system incorporates the L1C civilian signal (1575.42 MHz) to improve positioning accuracy. It is compatible with other global navigation satellite systems, such as China’s BeiDou and the European Union’s Galileo.

Suitable for civilian applications like navigation and precision positioning, the L1C signal uses a spreading code clocked at 1.023 million chips per second along with advanced modulation techniques.

The US government’s GPS website highlights that GPS III technology enhances navigational accuracy and signal reliability, benefiting numerous applications, including precision agriculture tools like GPS-guided farm tractors.

GPS III satellites SV01 to SV05 are in orbit, while GPS III-A (advanced) SV06 was launched Jan. 18, 2023, and GPS III-A SV07 will launch no earlier than January 2025.

GPS III-A satellites contain cesium atomic clocks, which use the vibrations of cesium-133 atoms to generate the clock’s signal, providing incredible precision with an estimated error of less than one second every 300,000 years.
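
Using the column’s own round numbers, those clock specifications correspond to fractional timing errors of roughly

$$ \frac{1\ \text{s}}{300\ \text{yr} \times 3.15 \times 10^{7}\ \text{s/yr}} \approx 1 \times 10^{-10} \ \text{(rubidium)}, \qquad \frac{1\ \text{s}}{300{,}000\ \text{yr} \times 3.15 \times 10^{7}\ \text{s/yr}} \approx 1 \times 10^{-13} \ \text{(cesium)}. $$

That precision matters because radio signals travel about one foot per nanosecond, so every billionth of a second of uncorrected clock error adds roughly a foot of ranging error.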

GPS signals, while generally accurate within a few feet in clear weather, can be disrupted by the ionosphere, a layer of charged particles in Earth’s atmosphere.

This interference can cause errors and signal loss, especially during severe space weather events like geomagnetic storms and solar flares, potentially resulting in inaccuracies of tens or even hundreds of feet.

Experts are exploring how artificial intelligence (AI) could improve GPS accuracy, especially in areas with signal interference or tall buildings.

The US Space Force and the National Institute of Standards and Technology are already using AI to strengthen GPS signals and timing.

AI-GPS technology also has potential for use with real-time traffic updates and self-driving cars.

The GPS Control Segment at Schriever Space Force Base in Colorado manages a global network of ground stations that monitor, maintain, and update the current GPS satellite constellation consisting of 31 operational satellites.

The Department of Defense has requested $1.59 billion in the 2025 budget to enhance GPS capabilities.


The official US government GPS website is http://www.gps.gov/.

A GPS III-A satellite in orbit. (Image: gps.gov)