I have research interests in philosophy of mind and applied ethics. In philosophy of mind, my work centers on questions about the nature of spatial representation across different sensory modalities: for example, do we always, or even typically, represent space in audition, olfaction, or gustation? If we do, what is the significance of this finding for broader questions in philosophy of mind?
In applied ethics, my focus is on the ethics of emerging technologies, especially AI and robotics. I am particularly interested in the ethical and social implications of widespread labor automation and, more speculatively, in the ethics of human-level AI.
Ethics of Emerging Technologies

My research in the ethics of emerging technologies focuses on issues related to the treatment of artificially intelligent agents equipped with human-level capacities. The potential development of such a powerful technology inevitably raises a host of philosophical issues. If human-level AI is possible in principle, then it is of vital interest whether it is permissible to build such agents in the first place and, if so, what moral and legal rights these beings should have and what principles should govern their creators. Failing to accord proper moral status to AIs, or to institute appropriate standards for their development, carries a serious risk of moral harms, such as rights violations and the unjust distribution of resources, to both actual human persons and potential artificial persons. These risks ought to be avoided, first and foremost by investigating what moral rights and responsibilities human-level AIs and their creators have.
There is already some philosophical work on how we should conceptualize the relationship between the designers of human-level AI and their creations. Building on and engaging with that literature, I explore whether this relationship can be modelled on more familiar and more widely discussed relationships, such as that between parents and children.
Relatedly, I am also interested in whether the creation of such artificial agents can be commodified. The challenge to companies and businesses aiming to build human-level AIs arises from (1) the fact that there is a powerful case to be made that these AIs will be persons, in the full sense of the term, and (2) the undeniable moral principle that persons cannot be owned, bought, or sold. I aim to examine whether, given (1) and (2), there is any room for an ethical and commercially viable approach to designing human-level AI.
Philosophy of Mind
In my research in philosophy of mind, I am particularly interested in the nature and scope of spatial representation across sensory modalities (such as vision, audition, and olfaction), and in what it can tell us about important features of experience, such as the unity of consciousness and the nature of multisensory perception.
At present, I am developing an account of spatial perception in olfaction and audition. My view opposes the more traditional approach in philosophy of perception, according to which auditory and olfactory experiences almost entirely lack spatial content. I argue that the case for the traditional view is considerably weakened once we recognize that the notion of spatial content it relies on is too restrictive. Being more liberal about which types of content count as spatial allows us to arrive at a very different view of how space is represented in hearing and smell.
Publications
Chomanski, Bartek. (forthcoming). “If robots are people, can they be made for profit? Commercial implications of robot personhood.” AI and Ethics.
Chomanski, Bartek. (forthcoming). “Spatial Experience in Olfaction: a Role for Naïve Topology.” Mind & Language.
Chomanski, Bartek. (forthcoming). “What’s Wrong with Designing People to Serve?” Ethical Theory and Moral Practice.
Chomanski, Bartek. (forthcoming). “Technological Unemployment without Redistribution: a Case for Cautious Optimism.” Science and Engineering Ethics.
Chomanski, Bartek. (2020). “Should Moral Machines be Banned? A Commentary on van Wynsberghe and Robbins, ‘Critiquing the Reasons for Making Artificial Moral Agents’.” Science and Engineering Ethics 26: 3469-3481.
Chomanski, Bartek. (2018). “Balint’s Syndrome, Visual Motion Experience, and Awareness of Space.” Erkenntnis 83 (6): 1265-1284.
Chomanski, Bartek. (2018). “On the Relation between Visualized Space and Perceived Space.” Review of Philosophy and Psychology 9 (3): 567-583.
Chomanski, Bartek. (2017). “What Makes up a Mood Experience?” Journal of Consciousness Studies 24 (5-6): 104-127.
Presentations
Chomanski, Bartek. (2019). “Should there be a right to build AI servants?” Business Ethics in a Digital Age, Harvard University, Cambridge, MA.
Chomanski, Bartek. (2019). “Smelling Places: the Spatial Content of Olfactory Experiences.” Southern Society for Philosophy and Psychology Annual Conference, Cincinnati, OH.
Chomanski, Bartek. (2016). “The Spatial Unity of Experience” (poster). Third Annual iCog Conference: Sense and Space, University of London, London, UK.
Chomanski, Bartek. (2015). “Are We Conscious of Our Thoughts’ Locations?” The European Society for Philosophy and Psychology Annual Conference, University of Tartu, Tartu, Estonia.