The rise of generative AI and unregulated content creation, exemplified by ChatGPT, poses a greater threat to truth than the “fake news” of past elections. The notion of “seeing is believing” is becoming obsolete as we enter this new era.
Currently, media coverage of generative AI focuses mostly on its novel capabilities, such as creating a realistic Drake-style song or producing a surprising image of Pope Francis. While these examples make for interesting stories, concerns about AI replacing jobs receive comparatively little attention.
It seems to me that OpenAI, the creator of ChatGPT, is effectively conducting gain-of-function research on humanity, a scenario that borders on the dystopian. The company's argument is that someone would eventually invent this technology anyway, so it's better that OpenAI does it and can guide its control and development. That argument is flawed at best. Here's a glimpse of what the future holds:
During his State of the Hack talk at the RSA security conference in San Francisco, NSA cybersecurity director Rob Joyce warned that non-English-speaking hackers from Russia won't be sending your employees poorly crafted phishing emails anymore. Adversaries, from nation-states to criminals, are already experimenting with ChatGPT-style generation to produce fluent, native-sounding English that makes sense and passes the "sniff test." It was part of a broader warning about how hard it is becoming to tell the real from the artificial, and these capabilities are here today.
Returning to politics, there is another, non-technical problem that arises once generative AI makes real and fake nearly impossible to distinguish: How will the public respond to the next scandalous revelation, such as the "grab-'em-by-the-you-know-what" comment made by Trump? If deepfakes become indistinguishable from reality, will such outrageous claims simply be dismissed as too wild to believe? It may even become common for candidates to blame their controversial statements on deepfakes, knowing it would be nearly impossible to prove otherwise.