Crypto lost its crown, an AI leader spoke, and Apple joined the struggling VR market – the year’s significant stories.
Happy Christmas! We’ve nearly made it through another year without succumbing to a super-intelligent AI, becoming Martian laborers for a mad billionaire, or being knocked offline by a Carrington event. Despite the absence of world-altering events, it’s been a busy year. The benefit of a leisurely week (hoping not to tempt fate) is the chance to look back over the past 12 months and recognize that, sometimes, only a handful of stories truly matter.
In winter, the Guardian is hit by a ransomware attack.
In December, the Guardian confirmed it had been hit by a ransomware attack, revealing that the personal data of UK staff members had been compromised in the incident.
Guardian Media Group’s CEO, Anna Bateson, and the Guardian’s editor-in-chief, Katharine Viner, stated, “We believe this was a criminal ransomware attack, and not the specific targeting of the Guardian as a media organisation.”
They emphasized that such attacks have become more frequent and sophisticated over the past three years, hitting organizations of all sizes and types around the world. Despite the breach, they offered reassurance: “We have seen no evidence that any data has been exposed online thus far and we continue to monitor this very closely.”
In my initial draft of this newsletter, I almost declared, “Obviously, this was the most important story in my life this winter.” Fortunately, a timely recollection that my son was born in January spared me from sending that off to my editor. While it might come across as self-indulgent to label the ransomware attack as one of the year’s biggest stories, especially for those outside the Guardian, it set the tone for the year ahead.
The incident underscored that cybercrime remains a persistent threat, capable of damaging institutions just as surely as more overt acts of vandalism. Fast forward almost a year, and the landscape looks unchanged. A recent parliamentary report warns that, thanks to inadequate planning and insufficient investment, the UK government is vulnerable to a “catastrophic ransomware attack” that could bring the country to a standstill.
The report also warns that future ransomware attacks may extend beyond digital disruption, posing a tangible threat to physical security and human life. It raises the possibility of cyber-attackers sabotaging Critical National Infrastructure (CNI) operations, or even hijacking “cyber-physical systems” – the alarming prospect of hackers taking control of essential functions such as the steering and throttle of a shipping vessel, something that has already been demonstrated in laboratory conditions.
In spring, the pioneers of AI speak out.
The concept of “existential risk” from artificial intelligence has been discussed for over a decade, brought to prominence in 2014 by the Oxford philosopher Nick Bostrom’s book, “Superintelligence.” However, mainstream treatments of the idea that a sufficiently capable AI could spell the end of civilization tended to be dismissive, waving it away with comparisons to Terminator scenarios.
A significant shift came in the spring of 2023, driven in part by Geoffrey Hinton, one of the three “godfathers of AI,” resigning from his position at Google to spend his retirement sounding the alarm about the potential risks:
“You need to imagine something more intelligent than us by the same difference that we’re more intelligent than a frog. And it’s going to learn from the web, it’s going to have read every single book that’s ever been written on how to manipulate people, and also seen it in practice… My confidence that this wasn’t coming for quite a while has been shaken by the realization that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better.”
Not all of Hinton’s peers share his view. Fellow “godfather” Yann LeCun dismissed the scenario as preposterous, saying: “Intelligence has nothing to do with a desire to dominate.” He argued that even among humans, the smartest individuals, such as Albert Einstein, have not sought to dominate others.
Whoever is right, it is undeniable that “AI risk” is now taken seriously in a way it simply wasn’t a year ago. That shift is exemplified by events like the UK’s AI safety summit, convened by Rishi Sunak after his many domestic successes apparently left him with nothing more pressing to attend to.