In 2023, many people started thinking a Matrix-style future could become a reality. In an era of constant technological advancement, we are gradually becoming desensitised to groundbreaking innovations. However, the explosion of generative artificial intelligence (AI) in the past year created a watershed moment. AI’s vast potential to reshape our working lives renewed humanity’s fascination and obsession with the intricate relationship between man and machine.
Like many, I saw the headlines speculating about how AI advancements will impact our daily lives and jobs. I'll admit, resisting the hype is tough, especially when large language models achieve near-perfect scores on tests. We even have technology leaders declaring that the future of AI will be "better than humans at pretty much whatever humans can do".
Interactions with friendly bots and virtual assistants are now a routine part of many people's everyday lives. Have you talked to Alexa or Siri today? Even before the prolific rise of generative AI, AI was woven into the most personal parts of our lives. But now, AI is more tangible than ever for people of all ages, education levels and fields, as tools capable of generating text and images from user prompts become mainstream.
Both the enthusiasm and the anxiety have led people to contemplate what this intelligent technology might do: aid us or take our place.
In the Matrix series, Morpheus famously reassured a humanity on the edge of extinction, declaring, “We’re still here.” And indeed, we are. While generative AI undeniably holds the potential to drive global innovation and economic empowerment, it’s not the world’s deus ex machina.
Today, we stand at a crossroads. It is our duty to steer this powerful technology and ensure it operates with a moral ethos, serving us rather than replacing us. Human feedback already plays a crucial role in training AI, and that role demands vigilant supervision: model transparency, ethical implementation and regulations governing its use.
So, what comes next? In a future filled with uncertainties, one thing remains clear – civilization is not on the brink of war with an army of hostile Agent Smiths, whatever sensationalist writers would have you believe. Instead, AI is reshaping how we operate in an increasingly digital world. For us humans, the best way forward is to shape the systems, processes and guiding principles behind AI, ensuring we're active participants, not passive bystanders, in our own AI revolution.
Here are three ways we may witness the evolution of the human-AI relationship in 2024 and beyond:
1. Upskilling with new AI regulation
Across governments and geographies, bills will be introduced specifically to guarantee human oversight of AI. These regulations will protect certain jobs from displacement and help create roles that previously didn't exist. In areas where it's more efficient and effective to deploy AI, there will be requirements to upskill employees for roles overseeing, developing and maintaining the technology. We'll also see more doors open for employees to team up with AI, paving the way for superhuman performance.
Discussions will also heat up about who is responsible for preparing the future workforce for hybrid human/AI roles. Government leaders may debate the merits of higher education curricula and on-the-job training in closing potential skills gaps between entry-level roles and more senior employees whose tasks are now accomplished through AI.
2. Reevaluating strategic business plans
Business leaders must examine the growing role intelligent technologies will play in their workplace operations, or they risk an array of challenges, lost value and talent drain. Some businesses should consider scrapping their long-term strategic plans if those plans fail to account for AI's role beyond performing certain tasks or functions. AI will have a profound impact on many organizations' structure and scale, and businesses should start planning for that now.
This will become more urgent as AI advances toward more autonomous stages of artificial general intelligence, where omnipresent applications may operate more independently of humans. Executives cannot afford to treat it as just another workplace technology.
3. Deepfakes will pose a threat to customer experience
AI deepfakes are making waves because of their potential to spread disinformation or misrepresent recognizable faces. The average person is also at risk. We'll see bad actors deploy deepfakes of consumers to defeat biometric verification and authentication methods. While this technology is in its infancy, the rapid advancement of AI increases the likelihood that customer trust and security will be disrupted. Businesses will need to focus on implementing authentication technology that addresses these risks, so their customers are protected and not easily exploited.
The human touch amidst artificial progression
As society continues to pursue groundbreaking technological advancements, the words written by our favorite science fiction authors increasingly mirror reality. As we adapt, it is vital to remember that human intuition remains at the core of machine and model learning. Remember, even the Matrix still relied upon humans.
Brett Weigl is SVP & GM at Genesys, leading offers for Experience Orchestration across Digital, AI, and Journey Analytics on the Genesys Cloud CX platform.