
Friday, August 9, 2024

AI: trust, but verify

© Mark Ollig

Generative AI is transforming work by drafting initial content and helping people explore new ideas and creative possibilities.

AI models draw on vast training data and computational techniques to generate original content for music, programming, journalism, and social media, and for applications like marketing and advertising.

Generative AI can also create and analyze images.

Some people debate the merit and credibility of AI-generated work, while others see it as a tool for inspiration, collaboration, and exploration.

A recent Pew Research Center survey of 10,191 US adults revealed mixed opinions on AI-generated content.

Pew reports that 60% of respondents want AI to cite the sources for the content it generates, and 61% say news organizations should be credited when AI draws on their content.

AI-generated content is being created on platforms like OpenAI’s ChatGPT, Meta AI, Google’s Gemini, and Microsoft’s Copilot, all of which are trained on massive amounts of human-created data.

I have used Google’s Gemini 1.5 Pro AI platform to extract and summarize information from mixed document formats, including Word, PDFs, spreadsheets, and text files.
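
For readers curious what that looks like in practice, here is a minimal sketch using Google’s google-generativeai Python SDK; the API key, file name, and prompt are placeholders of my own, not a record of an actual session.

```python
import google.generativeai as genai

# Placeholder API key; real keys are issued through Google AI Studio.
genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-1.5-pro")

# Upload a document through the File API (PDFs and text files are supported).
report = genai.upload_file("quarterly_report.pdf")  # hypothetical file name

# Ask the model to summarize the uploaded document.
response = model.generate_content(
    [report, "Summarize the key points of this document in five sentences."]
)
print(response.text)
```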

Alongside financial, educational, medical, and governmental institutions, the military is harnessing AI in surveillance, autonomous weapons, logistics, and training simulations.

I have witnessed AI being used in telecommunication networks to monitor performance, diagnose issues, perform maintenance, and optimize call routing based on volume. These actions make the networks more efficient in processing calls and preventing outages.
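
As a simplified, hypothetical illustration of one piece of that logic, volume-based routing can be reduced to picking the least-loaded path; real network management systems weigh many more factors, such as cost, latency, and equipment failures.

```python
# Hypothetical sketch: choose the least-utilized route for the next call.
# The route names and numbers below are invented for illustration.

routes = {
    "route_a": {"capacity": 500, "active_calls": 480},
    "route_b": {"capacity": 300, "active_calls": 120},
    "route_c": {"capacity": 400, "active_calls": 390},
}

def least_loaded_route(table):
    """Return the route name with the lowest utilization ratio."""
    return min(table, key=lambda r: table[r]["active_calls"] / table[r]["capacity"])

print(least_loaded_route(routes))  # prints "route_b" (40% utilized)
```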

While AI can enhance creativity and innovation by automating tasks, its growing use also presents challenges, such as its impact on employment.

Research by the McKinsey Global Institute suggests automation could displace between 400 million and 800 million jobs globally by 2030.

Another study highlights that generative AI platforms can produce inaccurate information, so users should verify the sources behind any AI-generated claims.

AI-generated content from platforms like Meta AI, ChatGPT, and Gemini Advanced should always be fact-checked and validated with cited sources to ensure accuracy and avoid misinformation.

AI also poses significant risks to the credibility of information, as AI-generated content can blur the line between real and fake.
Deepfakes, realistic but fabricated media created using AI, can manipulate public opinion and undermine democratic processes; such deceptions erode trust in AI.

Combating AI-generated misinformation requires both technical solutions, such as tools that detect deepfakes, and educational programs that teach people how to evaluate AI-provided information.

Of course, any “fictional” AI-generated content should be labeled as such.

When using AI platforms for serious work, we should be shown the reasoning behind their conclusions, along with the sources used to reach them.

The AI industry ought to establish clear public guidelines for the development, deployment, and operation of AI systems, and allow users to report inaccuracies or misrepresentations of their data.

Several organizations, including the Partnership on AI, the Responsible AI Institute, and the AI for Good Foundation, are working to establish ethical guidelines and standards for AI use.

The Alan Turing Institute and the Institute of Electrical and Electronics Engineers, through its Global Initiative on Ethics of Autonomous and Intelligent Systems, are also collaborating on AI guidelines.

More public dialogue is needed to discuss the benefits and risks of AI-generated content, along with its ethical and responsible use.

The training of AI models with copyrighted materials has ignited complex legal debates around ownership, fair use, and authorship.

As AI-generated content continues to evolve rapidly, it raises serious questions about how copyright laws apply.

The US Copyright Office has repeatedly rejected copyright applications for AI-generated works, citing copyright law’s requirement of human authorship, the “creative powers of the mind,” a standard AI has yet to meet.

The creative capabilities of AI models like GPT-3 and DALL-E 2 are rapidly advancing and have produced content that can be difficult to distinguish from human-created works.

There is an ongoing debate about whether AI truly possesses the “creative powers of the mind” as defined by human standards.

Some argue AI merely mimics patterns, lacking the intentionality and emotional depth associated with human creativity.

Others believe AI may soon achieve a level of ingenuity indistinguishable from, or even surpassing, human creativity by generating original content that pushes the boundaries of our imagination.

The 2021 unveiling of the advanced human-like robot Ameca in the UK, and its display at the Consumer Electronics Show in Las Vegas in 2022, have raised questions about the legal status of AI’s individuality and unique creativity.

Major news wire services are increasingly using AI to automate tasks and are assessing the use of AI-generated short stories.

The Associated Press (AP) has used AI for data-driven journalism since 2014, starting with financial reports and sports summaries and later expanding to election reporting, business news, and other data-driven stories.

The Guardian and the Washington Post have also employed AI to create articles and opinion pieces.

Gannett Co. Inc. is exploring AI’s capabilities for error checking, content summarization, and data analysis.

Although generative AI promises to revolutionize content creation, its transformative power demands a cautious approach: “Trust, but verify.”
