
RESEARCH AREAS:

  • Social & Political Philosophy

  • Philosophy of Language

  • Ethics of Emerging Technology

CONTACT:

MICHAEL BARNES

Postdoctoral Associate;
Rotman Institute of Philosophy, Western University

Michael Randall Barnes received his PhD from Georgetown University in 2019, where he wrote his dissertation on subordinating speech. Before that, he completed an MA at Carleton University, where he focused on exploitation. From this background, he approaches issues in AI ethics, addressing wide-ranging topics such as online hate speech, content moderation and recommendation, automation, fauxtomation, and ghost work.

Before joining the Rotman Institute, Michael was a Postdoctoral Fellow at the Institute for the Study of Human Flourishing at the University of Oklahoma, where his research focused on the spread of hate online. Earlier, he taught in the Philosophy Departments at Georgetown University, Ryerson University, and the University of Toronto.

My main research is on philosophical accounts of subordinating speech: speech acts that contribute to oppression, including hate speech, propaganda, slurs, microaggressions, and more. This topic sits at the intersection of social & political philosophy and philosophy of language, with healthy doses of applied ethics, feminist theory, and social epistemology as well.

Recent events have shifted my focus towards online hate, and the ethics of emerging technology more broadly. In general, my current research takes an expansive approach to analyzing the multiple harms of subordinating speech as it occurs online. More specifically, I next plan to address two ethically important questions that each raise difficult theoretical issues. First: what are the harms of online hate speech, both constitutive and causal, and how might accounts of harmful speech capture them? Second: what is the role of technology in producing (and perhaps resolving) the harms of online speech? As part of this second question, I will evaluate the complicated roles that AI and algorithms play in online radicalization at two important junctures: recommendation and moderation. At the same time, I turn a skeptical eye towards Big Tech’s claims about its AI-powered algorithms, and examine the situation of the undervalued human content moderators who clean up what AI cannot yet accurately detect.

Overall, my research bridges the gap between social philosophy of language and data ethics. This is a productive combination, since the internet is, at its core, a medium of communication and communicative acts. It is also an under-explored gap, as most philosophers of language have yet to reckon with the significant shifts the internet and social media bring to our ordinary practices. My work fills this gap, focusing on the (sometimes invisible) concrete harms that emerging technology makes possible, as well as the hard questions of responsibility that arise when screens and algorithms play key roles.

PUBLICATIONS:

“Hate Speech,” with Luvell Anderson, Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. (2022) (https://plato.stanford.edu/entries/hate-speech/)

“Positive Propaganda and the Pragmatics of Protest.” In The Movement for Black Lives: Philosophical Perspectives, edited by Michael Cholbi, Brandon Hogan, Alex Madva, and Benjamin Yost, 183–210. Oxford University Press. (2021) (https://global.oup.com/academic/product/the-movement-for-black-lives-9780197507773)

“Speaking with (Subordinating) Authority,” Social Theory & Practice 42 (2): 240–257. (2016) (https://www.jstor.org/stable/24871342)

“Exploitation as a Path to Development: Sweatshop Labour, Micro-Unfairness, and the Non-Worseness Claim,” Ethics and Economics 10 (2): 26–43. (2013) (https://papyrus.bib.umontreal.ca/xmlui/handle/1866/9631) [Reprinted in the Journal of the Canadian Society for the Study of Practical Ethics (CSSPE) Volume 1: Practical Ethics – Perspectives and Issues, 158–179. (2017) (https://scholar.uwindsor.ca/csspe/vol1/1/9/)]

TEACHING:

Western University:

  • PHIL 2037G: Philosophy and AI (Winter 2022)
  • CS 9147B / ECE 9660B / SS 9940B: AI and Society – Ethical and Legal Challenges (Winter 2022)
  • PHIL 9232B/9234B: Ethical and Societal Implications of AI (Winter 2022)

Ryerson University:

  • Philosophy and Death (Spring 2020)

University of Toronto:

  • Seminar in Applied Ethics: Ethics of Emerging Technologies (Spring 2020)

Georgetown University:

  • Introduction to Ethics (Summer 2017 & Summer 2019)
  • Introduction to Philosophy (Summer 2018)
  • Philosophy of Education (Spring 2017)
  • Ethics of Speech (Fall 2016)
  • Oppression & Justice (Spring 2016)
  • Bioethics (Fall 2015)