For more than a year, generative AI has dominated news cycles both within and outside the tech world. Unlike many fleeting tech fads, its momentum remains unyielding, and its significance is felt across borders, transcending geographical and sector-based demarcations. Just days ago, Meta announced that it would be rolling out new AI experiences across its network of apps, platforms, and social experiences. Similarly, Amazon has announced a potentially industry-altering partnership with Anthropic, the AI safety and research giant operating out of San Francisco. Not to be left in the dust, Alphabet’s Google has been working ardently to recover some of the prominence in AI research that it has lost to competitors like OpenAI and the companies mentioned above, and has even launched an educational initiative offering both free and paid courses on generative AI.

Advancements in generative AI, and increasingly creative applications of them, continue to capture the imagination of tech enthusiasts and professionals worldwide. However, like most technological leaps, this rapid advancement is not without its hurdles. The swift evolution and impressive integrations offered by AI solutions can easily distract from the urgent need for solid governance structures. Ethical considerations, especially those surrounding the originality of AI-produced works, data confidentiality, inherent biases in AI algorithms, and potential misapplications, still lag behind the speed at which new generative AI applications are released to the public. Though generative AI has become central to legislative discussions, what is truly necessary is a move from discussion to action. Countries worldwide are now in a race against time, striving to draft judicious regulations that foster AI innovation while safeguarding the rights and safety of their citizens.
On the optimistic front, the prospective uses of generative AI are both vast and revolutionary. But the reality on the ground paints a contrasting picture. McKinsey’s research reveals that in 2023, only about 21% of companies had instituted a well-defined AI governance program. This is a surprisingly low figure given the importance of such programs in ensuring that AI tools are used responsibly, and it is all the more striking considering that companies are increasingly recruiting AI specialists and training their teams to use AI tools. In a survey focused on the potential hazards of using generative AI at work, McKinsey found that most participants indicated their businesses are not addressing the most frequently cited risk: inaccuracy. This year, a mere 32% claim they are taking steps against inaccuracies, and only 38% are addressing cybersecurity threats. What is even more alarming is that last year’s figures were higher: in comparable research conducted in 2022, 51% reported addressing AI-related cybersecurity concerns. Across both years, the majority of participants consistently reported that their organizations are neglecting AI-associated risks. So long as we want to continue collaborating with AI, this has to change.
The implications of the widespread use of generative AI for business or creative purposes are profound. As industries harness its capabilities, we are likely to witness a surge in job roles that were previously unimaginable: designers collaborating with AI to fashion unique art pieces, writers co-authoring with machines to craft compelling narratives, or architects leveraging AI to conceive sustainable urban landscapes, to name just a few potential scenarios. This is all great, but there is also the looming fear of job displacement in sectors where AI can perform tasks more efficiently than humans. Similarly, fears of AI using the data sets it was trained on to produce content too close to the original human works are far from unfounded. In fact, many authors, creators, and designers have effectively lost the ability to prevent their creations from ending up in AI training data. One only has to perform a quick Google search to find almost endless instances of creators grieving over works fed to generative AI models without their permission. While it is true that some creators don’t mind, or even encourage, the use of their work, this is a decision each individual should have the right to make independently. In both cases, whether AI is able to perform a job better than a human, or whether it is “borrowing” (read: stealing) from human-made works to supplement its abilities, there is a loss of human opportunity.
The social and cultural impact of generative AI deserves the attention it has garnered, both positive and negative; there is no denying this. Where we must be careful, though, is in how we act on this attention. Are we simply oohing and aahing at the flashy advancements paraded before us, or are we considering their implications and whether the price we will pay for these novelties is worth it in the long run? As AI systems generate content, be it music, literature, or visual art, we will be, and already are being, prompted to reconsider our definitions of creativity and originality. How will societies value human-made art versus AI-generated creations? Where will we draw the line? How do we protect those who create from scratch? As generative AI plays an increasing role in shaping online content and interactions, it could reshape our very perceptions of reality and truth. We cannot simply stand by and allow this to happen without collective input and consent. In this transformative era, it is imperative to foster collective awareness and engage in critical discourse to navigate the promises and pitfalls of generative AI responsibly.