AI: The Good, the Bad and the Unknown
A new report from USC explores the future of the nascent technology
The University of Southern California’s Annenberg Center for Public Relations just released its Relevance Report, an annual publication that identifies emerging issues and trends impacting society, business, and communications in the coming year.
This year’s report takes a sweeping look at AI — its impacts on society and its applications for business. If you want a one-stop shop to understand the road ahead for this powerful technology, particularly in the field of communications, this is it.
At The Dialogue Project, a program at Duke University where I work with founder Bob Feldman and a steering committee from the Fuqua School of Business, AI is a hot topic. The Dialogue Project explores and enables ways that business can help reduce polarization in society; the rise of AI will play an obvious role in that dynamic. Our contribution to the Relevance Report follows (a quick read with active links), but I encourage you to take a look at the full report here.
We’re long past the point of debating whether we should adopt AI as a business tool. We’re already doing it. The future is here, and USC is leaning into it.
AI Creates a Greater Risk to a Polarized Population
Society was transformed by two so-called revolutions over the last two centuries — the Industrial Revolution and the Information Revolution.
Neither revolution came without costs. The early days of the Industrial Revolution, for all its benefits, also produced enormous downsides, including urban tenements, dangerous working conditions, child labor, and environmental degradation. It wasn’t until the 20th century that society began addressing these externalities at scale, most notably on the environment.
The Information Revolution began with mainframe computing in the mid-20th century, the precursor to the rise of the Internet at the turn of the century. The Internet ushered in a staggering array of benefits, to the point where it’s hard to imagine life without it. But it also created enormous vulnerabilities from hyperconnectivity, including systemic failures like the CrowdStrike outage in July and the rise of social media as a platform for misinformation and polarization. We are only now beginning to manage the negative fallout from social media, as seen in Meta’s new guardrails around the use of Instagram by adolescents.
And so it is with the rise of Generative AI, the latest and most profound iteration of the Information Revolution. For all the benefits it promises, in the early days we’re likely to see as much harm as good from this new technology, particularly in the social realm. To the extent that social media has contributed to polarization and the collapse of trust in society, which has been well-documented, AI has the potential to increase the trend by an order of magnitude.
“As trust diminishes, people retreat into smaller circles of familiarity or isolate themselves entirely, leading to a cycle of further erosion of social bonds and shared purpose,” wrote Kye Gomez on Medium.
“The consequences of this shift are profound and far-reaching, manifesting in political polarization, economic uncertainty, social fragmentation, and a pervasive sense of discontent that seems to defy our material progress.”
Social media is fueled by algorithms. They feed us what we want to see and hear based on our history, creating a funnel effect that increasingly constricts our worldview. Platforms like X and Instagram can, of course, be curated to include a diversity of content, but this takes effort and doesn’t deliver the dopamine hit that the algorithm does.
Generative AI poses a twofold challenge. It can supercharge the algorithm and greatly amplify the funnel effect. And it brings with it the ability to create fake content that increases engagement and credibility. The rapid spread of the pets-as-dinner rumor that arose in Springfield, Ohio in September was no doubt fueled by the proliferation of AI-generated memes featuring Donald Trump as the leader of an army of cats.
Here’s another example: a new social platform, SocialAI, presents itself as a variation of X, but it’s entirely populated by chatbots. The user chooses what types of bots they want to interact with, like “fans,” “supporters,” “haters,” “doomers,” and so forth. The implications for building deep, personal, and polarizing echo chambers are obvious.
On the other hand, a new application called DepolarizingGPT aims to steer users to the radical center of fractious debates. Type in a subject or question and the app serves up a left-wing response, a right-wing response, and a “depolarizing” response.
But in these early days of Generative AI, as society grapples with the new technology, the hazards may be just as great as or greater than the benefits, just as they were during the rise of previous paradigm-shifting technologies. According to a Pew survey of 300-plus experts (tech innovators, business and policy leaders, and academics), 37 percent said they were more concerned than excited about the potential of AI, with only 18 percent saying they were more excited than concerned; 42 percent said they were equally excited and concerned.
At The Dialogue Project at Duke University, where we are focused on enabling businesses to help bridge society’s polarization, this is a vital and emerging issue. Over time, we believe the world will create ethical and operating guidelines for Generative AI to deliver benefits responsibly and equitably. But these early days of rapid innovation and minimal guardrails compel us all to be aware of these potential hazards and do what we can within our respective realms to counteract them.