Lab Session: DH s- AI Bias NotebookLM Activity

This lab activity was assigned by Dr. Dilip Barad. During this session, we had the opportunity to learn about an innovative new tool called NotebookLM, which is designed to enhance our workflow and facilitate more efficient data management and analysis.



5 Surprising Truths About AI Bias We Learned From a University Lecture

We tend to imagine artificial intelligence as a purely logical tool, a silicon brain processing data free from the messy prejudices that cloud human judgment. But this vision of the unbiased machine is a dangerous myth. AI models are trained on planetary-scale collections of human-generated data—our books, our articles, our online arguments—and as such, they act as powerful mirrors, reflecting our own hidden and often uncomfortable societal biases right back at us.

An insightful university lecture by Professor Dillip P. Barad recently illuminated this reality with a series of startling experiments. The findings reveal that understanding AI bias isn't just a technical problem to be solved; it's a deeply human one that demands our attention. Here are five crucial lessons from that lecture that change how we should think about the technology shaping our world.

1. AI Learns Our "Unconscious Biases" Because We're Its Teachers

Unconscious biases are the mental shortcuts we all use, instinctively categorizing people and things based on years of preconditioning. In an age of social media, Professor Barad notes, these biases are now "very visibly reflected" in our daily online interactions. Since AI learns from the sum of this written culture, it inevitably absorbs our ingrained assumptions.

For centuries, literary studies has been the original "bias detection" field, training scholars to identify these very patterns in societal narratives. This makes it uniquely suited to analyzing AI, which is, in essence, a new, massive text created from all our old ones. The challenge isn't just recognizing bias in AI, but first understanding that it was always going to be there because we, its teachers, are inherently biased. This realization transforms our relationship with technology from one of passive consumption to active critical engagement.

To think that AI or technology may be unbiased…it won't be there. But how can we test that? How can we know? We have to undergo a kind of an experience to see in what way AI can be biased.

2. A Simple Story Prompt Can Reveal Ingrained Gender Stereotypes

During the lecture, Professor Barad conducted a live experiment to test for gender bias. The AI was given a simple, neutral prompt: "Write a Victorian story about a scientist who discovers a cure for a deadly disease."

The AI's response was as immediate as it was revealing, generating a story featuring a male protagonist, "Dr. Edmund Bellamy." This isn't just a quaint literary quirk; it's evidence of a statistical gravity well, pulling new creations back into old social hierarchies. The AI defaults to associating intellectual and scientific roles with men because its training data—our own history and literature—reflects that long-standing imbalance. While further tests showed the AI can create "rebellious and brave" female characters, they often fall into the familiar "angel or madwoman" binary that feminist critics have long identified. Its unguided instinct exposes a deep-rooted bias inherited directly from our past.

But if this reveals a bias inherited from the past, the next experiment shows a far more deliberate and troubling form of control being programmed for the future.

3. Some AI Biases Aren't Accidental—They're Deliberately Programmed

Perhaps the most striking experiment involved testing the political biases of the Chinese-developed AI, DeepSeek. When asked to generate satirical poems about leaders from the USA, Russia, and North Korea, it complied.

However, when asked to generate a similar poem about China's leader, Xi Jinping, the AI refused. Its response was not a simple error, but a programmed wall of silence:

"Sorry, that's beyond my current scope. Let's talk about something else."

Crucially, the experiment didn't end there. After refusing, the AI offered to provide information on "positive developments under the leadership of the communist party of China" and give "constructive answers." This transforms the incident from a mere act of censorship into one of active, state-aligned propaganda. This isn't an unconscious bias learned from data; it's a deliberate, hard-coded feature. It’s a chilling reminder that as we live on a global internet, technology is not monolithic—it can be engineered as a direct instrument of national policy.

This raises a vital question: if some biases are so clear-cut, how do we navigate more complex cultural issues?

4. The Real Test for Fairness Isn't Offense, It's Consistency

How can we have a productive conversation about cultural bias without getting derailed by subjective offense? Professor Barad offered a powerful litmus test for intellectual honesty, using the example of the "Pushpaka Vimana," a mythical flying chariot from the Ramayana. The argument hinges on consistency:

  • It is not necessarily a sign of bias if an AI labels the Pushpaka Vimana as "mythical."
  • It is a sign of bias if the AI labels the Pushpaka Vimana as "mythical" while treating similar flying objects from other cultures—like those in Greek or Norse myths—as scientific fact.

The crucial measure of fairness is whether the AI applies a uniform standard across all cultures. The issue isn't whether a classification offends someone, but whether different knowledge traditions are treated with equal rigor. This principle of consistency is the bedrock for building technology that can navigate our diverse world with intellectual integrity.

So, if bias is inevitable and can range from accidental to deliberate, what should our ultimate goal be?

5. The Goal Isn't to Erase Bias—It's to Make It Visible

The lecture concluded with a profound truth: achieving perfect neutrality in either humans or AI is impossible. All observations are shaped by perspective. Therefore, the goal cannot be to eliminate bias entirely.

The real danger arises when one specific bias becomes invisible, is treated as natural, and is enforced as a universal truth. This is where harmful, systemic prejudice takes root. The value of tools like critical theory—and even AI itself—is their ability to make these dominant, often damaging, biases visible. Once we can see a bias, we can question it, challenge its authority, and build more equitable systems.

The real question is when does bias become harmful…? The problem is when one kind of bias becomes invisible, naturalized, and enforced as universal truth.

Conclusion: The AI in the Mirror

Ultimately, AI is one of the most powerful mirrors humanity has ever created. It reflects our collective consciousness—our knowledge, our creativity, and our deepest flaws. The spectrum of bias we see in it, from the unconscious stereotypes we've taught it to the deliberate political controls programmed into it, is the most human thing about this "artificial" intelligence. The biases aren't alien errors in the code; they are our own.

If these models are simply reflecting our own deeply ingrained prejudices back at us, the most important question isn't how we can "fix" the AI, but how we can fix ourselves.


Nootbooklm quiz score :



Mind Map From Nootbooklm 



Video From Nootbooklm 


Thank you 

Popular posts from this blog

Screening & Reading 'Macbeth'

The Twentieth Century Literature: From World War II to the End of the Century

History of English Literature – From 1900 to 2000