Artificial intelligence has gone mainstream.
Once relegated to the realms of science fiction and speculative research, AI technologies such as ChatGPT and Bard chatbots have seamlessly integrated into the daily lives of millions.
Despite their current prevalence, experts assert that we are merely scratching the surface of their potential.
Léa Steinacker, the Chief Innovation Officer at the startup ada Learning and author of an upcoming book on artificial intelligence, draws a parallel between the current state of AI and the pivotal moment when Apple introduced the iPhone in 2007.
This comparison, evoking the widespread adoption of mobile internet access, emphasizes the transformative impact AI is poised to have.
In Steinacker’s words, “AI has reached its iPhone moment,” signifying a significant turning point. She emphasizes that applications like ChatGPT have democratized access to AI tools for end-users, thereby exerting a profound influence on society as a whole.
Will deepfakes help derail elections?
With so-called generative AI systems, individuals can now craft compelling texts and visuals from scratch within seconds.
This has ushered in an era where the creation of “deepfake” content, portraying individuals engaging in actions or utterances they never actually did, has become more accessible and cost-effective than ever before.
As the significant elections of 2024 draw near, spanning from the US presidential race to the European Parliament elections, analysts anticipate a notable upswing in the creation of deepfakes. The primary objective is to influence public sentiment or instigate unrest in the lead-up to the vote.
Juhan Lepassaar, executive director of the EU’s cybersecurity agency, sounded a cautionary note in mid-October, emphasizing that the trustworthiness of the EU electoral process hinges on our ability to depend on cyber-secure infrastructures, as well as the integrity and accessibility of information.
The magnitude of deepfakes’ impact will be heavily contingent on the proactive measures taken by social media entities to counter them.
Various platforms, including Google’s YouTube and Meta’s Facebook and Instagram, have instituted protocols to identify AI-generated content. The coming year will serve as a pivotal litmus test for gauging the effectiveness of these strategies.
Who owns AI-generated content?
To build “generative” AI tools, companies train the underlying foundation models on vast datasets of text and images scraped from across the internet.
So far, the utilization of these resources has transpired without securing explicit consent from the original creators — be they writers, illustrators, or photographers.
However, rights holders are mounting a counteroffensive, deeming these practices as encroachments upon their copyrights.
In a recent legal development, the New York Times initiated legal action against OpenAI and Microsoft, the entities behind ChatGPT, alleging the unauthorized use of millions of the newspaper’s articles.
San Francisco-based OpenAI is also facing a legal challenge from a consortium of prominent American novelists, including John Grisham and Jonathan Franzen, who assert that their works were utilized without consent.
Several other legal battles are currently in progress. Notably, the photo agency Getty Images has filed a lawsuit against the AI company Stability AI, responsible for the Stable Diffusion image creation system, over the use of its photos.
The initial rulings in these cases, expected in 2024, bear the potential to establish precedents dictating the necessary updates to existing copyright laws and practices in the era of AI.
Who holds the power over AI?
With the advancing sophistication of AI technology, the process of developing and training underlying models is growing more intricate and costly for businesses.
Digital rights advocates caution that this trend is consolidating cutting-edge expertise within the grasp of a select few influential corporations.
Fanny Hidvegi, Director of European Policy and Advocacy at the nonprofit Access Now in Brussels, underscores that this concentration of power—encompassing infrastructure, computing capabilities, and data—reflects a persistent issue in the tech domain.
As AI technology embeds itself as an indispensable facet of people’s lives, she alerts that a handful of private entities will wield substantial influence in shaping how AI will transform society.
How to enforce AI laws?
Against this backdrop, experts agree that — just as cars need to be equipped with seat belts — artificial intelligence technology needs to be governed by rules.
In December 2023, following extensive negotiations spanning years, the EU reached an accord on its AI Act—a pioneering and comprehensive set of specific laws dedicated to artificial intelligence.
All attention is now directed towards regulators in Brussels, with anticipation regarding their commitment to enforcing the freshly established rules. Robust discussions are expected on whether, and how, these rules may need to be adjusted.
Léa Steinacker emphasizes, “The devil is in the details,” indicating that both the EU and the US will likely engage in protracted debates concerning the practical nuances of these novel laws.