Tech and cultural politics in South Asia

August 14, 2024

Anis Rahman, an affiliate of the South Asia Center and assistant teaching professor in the Department of Communication, recently discussed his involvement in the 2024 Community College Master Teacher Institute (CCMTI).

This year’s theme, “The Promises and Perils of New Technologies: Global Perspectives,” provided a backdrop for his examination of technologies and disinformation in South Asia through the presentation “Emerging Information Technologies, Disinformation, and Cultural Politics in South and Southeast Asia.”

Anis Rahman giving a talk at the 2024 CCMTI

Can you summarize what your talk, “Emerging Information Technologies, Disinformation, and Cultural Politics in South and Southeast Asia,” is about?

In my presentation, “Emerging Information Technologies: Disinformation and Cultural Politics in South and Southeast Asia,” I examine the intersection of new technologies, particularly AI, and their impacts on politics and culture. The discussion begins with definitions and frameworks, providing examples of General Purpose Technologies (GPT) from ancient to modern times and discussing the combination of these technologies in Information and Communication Technology (ICT).

AI’s pervasive influence in news media and politics is addressed, including challenges such as tracing AI’s impact due to unverified user IDs. Both positive aspects of AI, such as faster and cheaper production and the ability to reach diverse audiences, and negative aspects, including the manipulation of opinions, blurring of reality, and amplification of biases, are explored. The presentation emphasizes the political nature of technologies, their unpredictability, and the uneven distribution of benefits and burdens based on social contexts.

Specific AI-related issues, like the liars’ dividend, alignment problems, and the virality versus veracity dilemma, are examined, along with AI’s potential to reflect and reproduce the biases of its creators. Case studies from Indonesia, India, France, and the US illustrate AI’s influence on democracy.

Proposed solutions include banning AI-generated political content before elections, labeling AI-generated content, promoting AI literacy and digital forensics, and cross-referencing with reliable sources. The European AI Act’s mandate for labeling AI-generated content is discussed as a regulatory measure. The presentation concludes with interactive sessions and group discussions aimed at enhancing AI literacy and evaluating regulatory measures, emphasizing a balanced approach to the benefits and risks of emerging technologies.

(As an example of the pervasiveness of AI tools, Rahman generated this response by prompting ChatGPT to summarize his presentation for CCMTI.)

What inspired your research into this topic?

At the Department of Communication at UW, I teach COM 302: The Cultural Impact of Information Technology and COM 495: Money and Power in International Communication. These courses have helped me think about AI in politics through critical and comparative lenses. Having also witnessed the use of AI-generated disinformation in South Asia, my primary region of research interest, I believe this is a timely topic to address.

What do you hope to achieve through your presentation at CCMTI?

I hope to spark interest in critical thinking and the cautious use of AI among the audience of the CCMTI. I am also interested in learning how fellow teachers are using AI and how they plan to teach their students about AI. Through a dialogical presentation, I plan to share as much as I can while learning from the attendees.

What would you say is the biggest takeaway from your presentation?

The primary takeaway from the presentation is that AI and emerging technologies present both opportunities and risks. They enable quicker, more cost-effective content creation and can engage a wider audience. However, they also pose significant dangers, such as distorting opinions, reinforcing biases, and creating difficulties in regulation and verification. Addressing these issues requires a comprehensive approach, combining effective regulation, AI literacy education, and robust digital forensics, to maximize the benefits while minimizing potential harms.

Anything you’d like readers to know?

I would like to emphasize that AI-generated disinformation is a global problem, and solving it at the local level requires international coordination among nation-states. The EU AI Act 2024 shows how this can be done at the regional level. However, given that the majority of generative AI companies are based in the United States and profit from their business worldwide, regulatory bodies in the United States have a major obligation to mitigate the technology’s negative impacts. New regulatory efforts need to reflect this moral obligation, not just for the rest of the world but also to safeguard the country’s own democracy.