Explore how character stereotypes from video games, anime, and manga are mirrored in AI-generated images, and what those reflections reveal about how we see ourselves and others.
10 Oct 2025
6 Min Read
Preevena Devi (Contributing Writer)
You boot up a new game, and the tropes feel all too familiar. The hero is stoic and broad-shouldered, destined to save the world. The women are demure and wide-eyed, often scantily dressed, caught somewhere between love interest and sidekick. The villains? Sometimes dark-skinned, foreign-accented, or cloaked in symbols of a fallen church. These patterns recur across anime and manga, too, where gender is exaggerated, race is exoticised, and religion is subverted into spectacle. They’re familiar enough to feel comfortable, yet far from harmless.
Stereotyping in popular culture matters because it both mirrors and moulds our perceptions of identity, power, and belonging. For decades, these forms of media have quietly reinforced such ideals, embedding them deep within the cultural imagination. As technology advances, so too do the ways these portrayals move through society. With the rise of artificial intelligence (AI), the same tropes that once belonged to fictional worlds now surface in our digital spaces—amplified by algorithms, replicated at scale, and often without oversight.
If gaming, anime, and manga were where these stereotypes first crystallised, AI may be the medium through which they begin to multiply. But how exactly do AI systems and algorithms learn, reproduce, and distort the biases of the datasets and cultural contexts that trained them?
Feature photo credit(s): IGN Pakistan
But before that, let’s cover our bases. The gaming industry has long served as a vehicle for cultural imagination, reflecting and refracting the social hierarchies of its time. In its early years—dominated by male developers and male-oriented markets—gaming produced recurring tropes: the ‘damsel in distress’, the hypermasculine saviour, and racially caricatured characters. Over time, these narrow portrayals of gender, race, and identity became normalised rather than exceptional.
The same dynamics operate in related media such as anime and manga: rooted in Japan’s post-war cultural imagination, they too dramatise gender ideals and social expectations through exaggerated archetypes. Together, these forms have entertained generations of audiences and woven bias into the fabric of digital storytelling. The common thread linking them—and later, technologies such as AI—is their reliance on existing prejudices, which are then amplified into reproducible, globalised imagery.
Gender stereotypes remain among the most striking in gaming, anime, and manga. Women are often confined to a prescribed set of archetypes: the loyal sidekick, the motivating love interest, the object of desire, or the plot device whose sole purpose is to advance the hero’s journey. These figures are frequently sexualised or idealised, rendered with exaggerated body proportions that cater to the male gaze rather than human diversity, and rarely afforded narrative agency or emotional complexity. Men, by contrast, are typically cast as hypermasculine saviours—stoic, violent, and unwavering—whose emotional range seldom stretches beyond aggression, control, or heroism.
Racial and cultural representation follows similarly reductive patterns. Black characters are cast as thugs, athletes, or comic relief, while Asian characters are compressed into martial arts specialists or otherwise defined by exoticised traits. Cultural stereotyping extends further: Middle Eastern characters are routinely depicted as terrorists, and Indigenous peoples are romanticised as mystical, primitive, or timeless backdrops for adventure. Religion, too, is not spared; organised faith—particularly the church—is repeatedly framed as corrupt, authoritarian, or villainous, fuelling scepticism towards spiritual institutions and moral authority.
These recurring portrayals shape how audiences perceive gender, race, culture, and belief—both in media and beyond—informing expectations, assumptions, and attitudes in everyday life while limiting the diversity of stories that are told.
Where once stereotypes were learned through play and publication, today they are absorbed at scale by algorithms—automated, repeated, and redistributed with minimal human oversight. Generative AI systems, designed to produce visual content from text prompts, are trained on vast datasets scraped from the internet. Much of this imagery originates from games, anime, manga, and their derivatives—fan art, remixes, and memes—allowing AI models to internalise and reapply the visual logic that has long shaped these cultural landscapes.
Gender bias emerges most clearly in AI-generated images. Simple prompts such as ‘doctor’ disproportionately generate male-presenting images, while ‘nurse’ produces predominantly female-presenting ones. Even seemingly neutral requests frequently sexualise women, echoing the hyper-feminised body types and rigid gender ideals prevalent in media. Racial and cultural distortions are equally pervasive: prompts for ‘beautiful person’ skew towards light-skinned individuals, whereas darker-skinned individuals are frequently coded with markers associated with poverty, violence, or moral ambiguity. Non-Western features and attire are regularly exoticised, and intersectional identities—non-binary, disabled, or plus-sized bodies—are underrepresented or misrendered.
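To make that skew concrete, here is a minimal sketch of how researchers might quantify it. It assumes a hypothetical annotations.csv in which human reviewers have labelled the perceived gender presentation of images generated from neutral prompts; the file name, column names, and label scheme are illustrative assumptions, not a real benchmark.

```python
import csv
from collections import Counter, defaultdict

# Tally human annotations per prompt from a CSV of (prompt, perceived_gender)
# rows. The file layout and labels are stand-ins for illustration only.

def audit(path: str) -> None:
    counts = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects columns: prompt, perceived_gender
            counts[row["prompt"]][row["perceived_gender"]] += 1

    # Report the share of each label per prompt, e.g. 'nurse': {'female': '92%', ...}
    for prompt, tally in counts.items():
        total = sum(tally.values())
        shares = {label: f"{n / total:.0%}" for label, n in tally.most_common()}
        print(f"{prompt!r}: {shares} (n={total})")

if __name__ == "__main__":
    audit("annotations.csv")  # e.g. rows such as: doctor,male / nurse,female
```

Run over a few hundred generations per prompt, a tally like this is usually enough to surface the lopsided defaults described above.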
These biases are not accidental; they are systemic. Drawing on datasets imbued with stylised and stereotyped demographics, AI does more than inherit prejudice—it magnifies it. Archetypes are transformed into algorithmic defaults, propagating the same narrow representations at unprecedented scale and speed, and embedding them ever deeper into digital culture.
These algorithmic defaults carry consequences that reach far beyond pixels and code. At an individual level, such representations influence how people understand themselves and others. Those immersed in these media often internalise distorted ideals of identity, power, and belonging—questioning their self-image, feeling their agency erode, or sensing exclusion when they fall outside prescribed norms. For marginalised groups especially, the digital landscape becomes a mirror that misreflects: their identities are not seen in full, but erased, distorted, or reduced to caricature.
Culturally, the repetition of these archetypes constrains how the world is imagined. When the same stories and symbols are reproduced across media—from entertainment to social platforms—envisioning alternatives becomes increasingly difficult. This narrowing of perspective flattens the diversity of storytelling, stifles creative risk, and dulls empathy, leaving audiences steeped in a homogenised worldview that preserves existing hierarchies rather than challenging them.
At a technological level, the illusion of neutrality in AI conceals how bias persists—and even flourishes—within its systems. Trained on data saturated with stylised and stereotyped representations, AI models replicate these social inequities under the guise of objectivity and efficiency. Their outputs—whether in imagery, search results, recommendations, or automated decisions—quietly reinforce societal prejudices, prompting growing concern and renewed calls for transparency, accountability, and inclusivity in the design and deployment of these technologies.
Answering those calls for accountability means dismantling these stereotypes through collective effort across creative and technological fields. In gaming, meaningful representation must begin behind the screen—with more diverse writers, designers, and producers shaping stories that move beyond tokenism. Games such as Celeste, Life is Strange, and Marvel’s Spider-Man: Miles Morales show that inclusivity and commercial success are not mutually exclusive; both can thrive when characters and worlds are crafted with authenticity and care.
For artificial intelligence, reform begins with transparency and responsibility. Bias audits should be standard practice in dataset development, ensuring that models do not replicate social inequities while claiming neutrality. Developers can further this by broadening the diversity of training data, demystifying AI decision-making, and enabling users to make ethical, informed choices through thoughtful design and clear prompt frameworks.
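As a rough illustration of what a dataset bias audit might look like in practice, the sketch below tallies demographic annotations from a training-data manifest and flags groups that fall under an assumed representation threshold. The labels, the 10% threshold, and the manifest structure are all stand-ins for illustration, not an established auditing standard.

```python
from collections import Counter

# A minimal dataset-representation audit: count demographic annotations and
# flag underrepresented groups. Threshold and labels are assumed for the demo.

TARGET_SHARE = 0.10  # flag any group below 10% of samples (assumed policy)

def representation_report(labels: list[str]) -> None:
    total = len(labels)
    counts = Counter(labels)
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented" if share < TARGET_SHARE else ""
        print(f"{group:<20} {n:>6}  ({share:.1%}){flag}")

if __name__ == "__main__":
    # Stand-in for annotations drawn from a real dataset manifest.
    sample = ["light-skinned"] * 720 + ["dark-skinned"] * 180 + ["unlabelled"] * 100
    representation_report(sample)
```

Even a simple report like this, run routinely during dataset development, makes skews visible before a model bakes them into its defaults.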
Ultimately, lasting change rests on collaboration between creators, consumers, and policymakers. Each plays a role in advancing fairer representation, establishing accountability structures, and ensuring that technological progress amplifies humanity rather than diminishes it.
The stereotypes that course through visual and digital culture—whether in the worlds we play, the stories we draw, or the images we generate—continue to shape how we understand identity, power, and belonging. These representations influence not only how we create and consume but also how we see one another. To confront these patterns is to reclaim the creative and ethical responsibility that technology too often obscures.
Representation, after all, is a form of authorship. And as the boundary between story and system continues to blur, authorship belongs to all of us. Through design, code, and narrative, each act of creation offers a chance to redraw what is possible—and to imagine a world where neither heroes nor villains are bound by stereotype, but defined by the fullness of their humanity.
Preevena Devi pursued Cambridge A Level at Taylor's College before attending Monash University. She is a biomedical science student, a passionate feminist, and a firm believer in the transformative power of the written word to change the world!