Grad Research

Graduate students participate in a rich research environment, with regular department colloquia, productive small workshops, and opportunities to collaborate with faculty. Most students who envision going on to the PhD choose to write a Professional Paper, which usually begins as a successful paper from a graduate seminar and is later honed into a writing sample over the course of a term, in collaboration with the student's supervisor. Abstracts for recent professional papers are below, organized by topic.

Incoming students are given a travel budget of $1200. They are encouraged to submit short papers to conferences and/or attend workshops in their areas of interest. Conference/workshop funding for MA students is a new addition to our program and to date students have had overwhelmingly positive experiences using their travel funds.

Professional Papers and Theses


Freedom, Infinitude, and Order in Margaret Cavendish: Mary Purcell 

Abstract: Throughout her work on natural philosophy, Margaret Cavendish (1623-1673) defends the belief that all nature is composed of free, self-moving, rational matter. Cavendish advocates for a robust sense of freedom in which all parts of nature determine their own actions. On her account, parts of nature are free to create irregular motions and disorder. Still, Cavendish esteems nature for the general order and harmony present amongst her free parts. Given the great potential for disorder in her system due to the freedom of nature, Cavendish leaves us asking, “Ought not we to expect nature to be significantly more chaotic?” (Detlefsen 2007, 186). 

In this paper, I argue that, on the infinite scale, Cavendish provides a response. For the whole of nature, balance arises through the infinitude of degrees of qualities, such as softness or thickness, found in the natural world and their ability to prevent what Cavendish refers to as ‘absolute power.’ This account of order in Cavendish also supplements aspects of her anti-atomist beliefs and arguments regarding infinite worlds. I conclude the paper with a discussion on the limitations of Cavendish’s account to address how it does not translate to balance and order in finite parts of nature.

Supervised by Lisa Shapiro

Why Category Mistakes Seem Odd: Curtis Hrusik 

Abstract: I'm going to talk about why sentences like 'The number two is green', 'Some ideas eat dinner', and 'They sleep furiously' seem odd.  These sentences are called category mistakes.  Ultimately I will argue that category mistakes seem odd because, although they are meaningful, they are neither true nor false; and I will argue that this is so because the meanings of their constituent expressions are underdetermined by our use of those expressions:  namely such that, on some ways of making those expressions precise, they are true while, on others, they are false.  This theory of category mistakes is called supervaluationism.  In short then, I will argue that supervaluationism is the best theory of category mistakes.

Supervised by Tom Donaldson

Tight Corners and the Rationality of Moral Action: Abhi Ruparelia 

Abstract: According to the Humean theory of practical rationality, an agent’s reasons for action are fundamentally grounded in her subjective motivational set. Once a leading proponent of Humeanism, Philippa Foot has since distanced herself from the view. Her dissatisfaction lay in the theory’s inability to generate agent-neutral moral reasons in “tight corner” situations, i.e., when the demands of morality are at odds with those of desire or self-interest. My goal in this paper is to argue that Foot’s objection does not deal a decisive blow to Humeanism. As I shall argue, the Humean has the conceptual resources to address the problem of tight corners. My argument proceeds in two stages: first, I draw on Mark Schroeder’s (2007) discussion of Hypotheticalism to demonstrate the possibility of general agent-neutral reasons within a Humean framework. Next, I appeal to the Aristotelian notion of the reciprocity of virtues to show that anyone who has reasons to act in accordance with any of the central virtues must inevitably also have reasons to acquire all the other virtues. This conclusion, when stretched to its full extent, paves the way for (quasi-)agent-neutral moral reasons for action. Appealing to these kinds of reasons, I argue, solves the problem of tight corners for the Humean.

Supervised by Bruno Guindon

An Engineering-Inspired Account of Knowledge: Iman Ferestade 

Abstract: The question of how scientists obtain knowledge from running computer simulations plays an integral role in the epistemology of modern science.  In the literature on computer simulations, it is assumed that knowledge about the results of computer simulations is grounded in the knowledge of the process through which computer simulation results are achieved.  On that basis, two widely recognized accounts are developed:  an internalist account, which holds that computational simulations are just computer-aided arguments, and an externalist account, which holds that computational simulations are a kind of experiment in their own right.  In this paper, I introduce an engineering-inspired account of computational simulation knowledge that transcends the internal-external dichotomy.  I show that if the model entered into the simulation is well-conditioned, and the residual resulting from plugging the computed solution into the defining equation is sufficiently small, we can claim to know that the computer-generated solution is approximately true.  Checking the residual is an internal component because it is calculated by computer users to give justification for the computed solution, whereas well-conditioning is an external component because it is merely a property of the model used in the simulation and is, therefore, beyond the control of computer users.  Based on this account, knowledge can be gained through computer simulation even if the simulation process or the nature of computer simulations remains opaque, since the method of justification can be different from the method of discovery.  In conclusion, I highlight the fact that computational simulations are more akin to argumentation than arguments.

Supervised by Nic Fillion

Mackie on Objective Values: Mete Gencer 

Abstract: One of the most infamous skeptical arguments in ethics is J. L. Mackie’s argument from queerness. Over the last fifty years, by and large the most defended, assumed, and taught interpretation of the argument has remained that objective values are so strange that they cannot exist. Here, I argue that this interpretation is false; it contradicts the major corpus of Mackie’s work, including his Ethics (1977), where he advances the argument. For Mackie, there is a live possibility that objective values exist, a possibility surprisingly actualized only with a divine entity. After arguing against the standard interpretation, I show that the interpretive remedy lies in taking seriously Mackie’s overlooked target: the eighteenth-century British moralists. I conclude by pointing out that the sort of blunder made by philosophers here plagues moral philosophy even today.

Supervised by Bruno Guindon

Studies of temporal phenomenology won't tell us whether time passes: Elliot Schwartz 

Abstract: Two recent empirical studies, Latham et al. (2020) and Shardlow et al. (2021), aim to help resolve the debate between dynamic and static theories of temporal metaphysics by testing whether our temporal phenomenology could form part of an ‘argument from experience’ for the dynamic theory. To determine whether subjects had the phenomenology in question, the authors measured levels of agreement with linguistic descriptions of experience (e.g., “I see time passing”, “It feels to me like the present moves”). I argue that any study relying on linguistic descriptions of phenomenology to resolve the debate between dynamic and static theories is metamethodologically flawed; any linguistic description of the phenomenology in question is either too vague or too question-begging to have theoretical utility in adjudicating the metaphysical dispute. I begin by arguing for two methodological injunctions that linguistic descriptions of experience must obey to provide evidence that our phenomenology is suggestive of a dynamic theory. First, linguistic descriptions should not conflate experiences of things changing with experiences of things as changed. Second, linguistic descriptions should not conflate direct experiences with beliefs about metaphysics derived from such experiences or beliefs about which metaphysical theories this experience favors. I argue that Latham et al. (2020) and Shardlow et al. (2021) used ambiguous linguistic descriptions that violated both these injunctions and therefore neither study provided evidence as to the existence of passage phenomenology. I then show that no study could generate linguistic descriptions that resolve these ambiguities without presupposing metaphysical views about the precise manner in which the flow of time is realized in our phenomenal experience, thereby begging substantial metaphysical questions. 
I conclude that the argument from experience can be neither confirmed nor disconfirmed using experimental philosophical methods.

Supervised by Holly Andersen

Defending (Behavioral, Retributive) Anger: Winston Meier 

Abstract: What role, if any, should anger play in political movements? The answer to this question often hinges on a set of other questions, including: Can the feeling of anger be isolated from angry behaviors—riots, rebellions, and general “uncivil disobedience?” Is anger necessarily retributive—is it always characterized by a vengeful spirit? A plausible route to defending anger, then, is to answer “yes” to the first question and “no” to the second, while claiming that the material gains of anger outweigh its less desirable qualities. This paper identifies a kind of anger that is behavioral, retributive, and defensible. I argue that anger can be a function of an incomplete stress response (ISR)—a bodily phenomenon that was first identified by Pierre Janet (1859-1947) and that has recently become a topic of interest in trauma psychology. In these cases, the desire for revenge can be explained by the body’s continued readiness to fight (rather than flee), even in the absence of an immediate threat. In the context of political oppression, riotous behaviors can be explained as an attempt to complete said stress response. As I will argue, there are cases where such behavioral anger may be necessary both for restoring a victim’s agency (when it has been incapacitated by an ISR) and for undermining political domination.

Supervised by Nic Bommarito


Distinctively Mathematical Explanations of Classical Dynamical Systems: Mateo Ochoa Coloma 

Abstract: In classical mechanics, forces are the fundamental component of dynamical systems—without forces, a system could not be classified as dynamical. Moreover, dynamical systems are distinguished from one another by the forces at play in each system. These two considerations can lead us to reasonably expect that the features that dynamical systems exhibit are explained by the forces at play in each of them. Furthermore, we can also reasonably assume that all explanations where forces do the explaining are causal explanations; after all, forces are causes par excellence. But if we assume that explanations that cite forces are causal explanations, then it follows that all explanations of features of dynamical systems are causal explanations.

In this paper I argue that this is not so. I argue that there are what Lange (2016) calls distinctively mathematical explanations of features of dynamical systems, which means that not all explanations of the features of dynamical systems are causal. I do so by examining the role that modelling plays in how we explain certain features of dynamical systems, and by distinguishing two ways in which a model is said to hold of the system.

Supervised by Holly Andersen

Supererogation or Shupererogation?  An Epistemic Account of the Intuition Behind Heroic Acts: Kesavan Thanagopal 

Abstract: Agents who perform heroic acts often report that they had acted in the way that they did because they felt that they were required to perform those acts at that moment in time.  Yet, proponents of the concept of supererogation - acts which go "above and beyond" the call of duty - customarily dismiss the testimonies of these heroic agents, positing that these individuals are simply mistaken about their moral requirements; after all, heroic acts like protecting Jews from Nazi persecution during WWII or running into burning buildings to save individuals trapped in the fire fall within the category of supererogatory acts, and such acts are supposed to be truly optional, and not morally required.  Such a dismissal, however, appears to commit proponents of supererogation to a rather unpalatable position:  that many of our moral heroes are, in fact, morally confused.  This paper proposes an alternative, epistemic framework to elucidate the intuition behind those heroic acts - an account that not only takes seriously the testimonies of these heroic individuals, so as to avoid the drastic conclusion that moral heroes are often morally confused, but also one in which the notion of supererogation could be entirely circumvented.

Supervised by Chelsea Rosenthal

Defending Historical Constructivism Against the Trilemma Objection: Xinyu Xu 

Abstract: In his paper "The End of Historical Constructivism", Joshua Glasgow presents what has come to be known as his trilemma objection to historical constructivism, arguing that any account of historical constructivism would inevitably succumb to one of three major problems:  either it would be undermined by the challenge posed by the circularity problem, or, if it escapes that, it would fail to address either the challenge posed by the redundancy problem or that of the indeterminacy problem.  In response to Glasgow, Esa Diaz-Leon defends historical constructivism about races, providing a means by which historical constructivism can evade the trilemma objection.  In this paper, I will argue that despite Diaz-Leon's best efforts at evading the trilemma objection, her proposal nevertheless faces one of the three major problems at any one point in time.  While this may seem to suggest that historical constructivism is false, I will further contend that the indeterminacy problem could itself be an illegitimate concern; as such, by disregarding this problem, it would be possible to adequately defend historical constructivism by facing only the challenges posed by the circularity and redundancy problems.

Supervised by Jennifer Wang

In Praise of Being Just This, Merely This: Jing Hwan Khoo 

Abstract: In thinking about what meaning in life consists in, there are at least two models of meaning that we might be familiar with. According to what I call “the Project View,” a meaningful life is one that is committed to a ground project. In contrast, according to “the Momentary View,” a meaningful life is one that is lived in the present moment. In contrasting these two views, it might be tempting to think that the latter model is somehow too trivial and frivolous to address the existential concerns from which the problem of meaning arises in the first place. In this paper, I defend the Momentary View from such worries by showing how this model offers an unforeseen existential payoff: living in the present moment, insofar as it involves an appreciation and celebration of what is good in the world, involves a distinct kind of meaning that connects us to something beyond ourselves. Furthermore, I argue that this kind of meaning is foundational to the kind of meaning we find in projects, insofar as the value of our projects is often derived from the value of the meaningful moments that such projects enable. Underlying the distinction between both views are thus two fundamentally different stances we can adopt towards what is good in life, both of which are equally legitimate and crucial to meaning.

Supervised by Chelsea Rosenthal

Epistemic Nudge, Testimony, and A Fuller Picture of Communication: Jingyi Wang 

Abstract: Popularized by Thaler and Sunstein (2008), the notion of nudge refers to a technique for steering people's decisions in a particular direction while allowing them to go their own way.  Nudge has been brought to epistemologists' attention and adapted into a notion of epistemic nudge:  an intervention that steers people's inquiry in a particular direction while still allowing them to go their own way.  Much has been written on conceptions of and normative concerns about epistemic nudging; still, one key component lacking in the current literature is a systematic comparison between epistemic nudge and other familiar notions in epistemology.  This paper aims to fill the lacuna by conducting a comparison between epistemic nudge and testimony.  At first glance, epistemic nudge and testimony appear to be categorically distinct:  in offering testimony that p, the testifier is typically assumed to explicitly assert that p.  By contrast, in making the hearer believe that p, a nudger would refrain from such an outright assertion and instead work on other contextual features or implicit contents.  This paper, however, argues that nudge and testimony are much closer than we first take them to be.  Moreover, by bringing out the affinity between epistemic nudge and testimony, learning about the norms governing one enables us to learn about the other, and also to learn about the norms of communication in a more holistic way.

Supervised by Endre Begby

The Ethics of Emotional Dependence: Mengfei Lu 

Abstract: In day-to-day life, we often think that being emotionally independent and self-sufficient is an important capability that allows us to achieve our well-being. On this view, emotional dependence means that we are weak and needy. Some Neo-Aristotelian and Neo-Stoic philosophers, too, suggest that to seek others’ support is to place an emotional burden on them. In this paper, I argue against such views. I show that emotional dependence is important for our well-being: dependent relationships provide the context for a wide range of goods that enable us to live well, with one important kind being the development of emotional regulation.

In distressful situations, being cared for by our friends or families alleviates our negative emotions and restores our agency. The sense of security they provide promotes our confidence and self-esteem. In addition to these, by having a reliable emotional support network, we also develop the ability to regulate our emotions. We modify our emotional predispositions and advance our coping mechanisms. These enable us to handle distressful situations ourselves. Emotional dependence, in turn, develops a sort of independence in us – with our close ones as our backup, we are able to tackle different kinds of tasks and to thrive in life. As we become better at self-care, we also become better carers for our close ones. Our relationships flourish. Emotional dependence, therefore, brings us important goods that we could not acquire through other means.

Supervised by Nic Bommarito

Some Ruminations on Doing Ethics Ethically: Jenna Yuzwa 

Abstract: What is the role of an ethical theory?  What standards should it meet?  The answers to those questions reveal much about our methodology and approach to ethics.  Yet, what sort of assumptions do these answers rest on, and what implications do they have?  These are crucial yet frequently neglected questions.  One area that sheds light on them is the literature on supererogation and virtue ethics.  In recent decades, there have been some attempts to demonstrate that virtue ethics can accommodate supererogation, while others have denied that the former is reconcilable with the latter.  Regardless of the stance philosophers take, they share a common assumption - that one central objective of an ethical theory is to justify and explain the moral status of a given act, a moral status informed by our intuitions.  Aristotle's view on virtue is often taken to be the paradigmatic example of a virtue ethical theory (Crisp 2013; Heyd 2015; Stangl 2016; Wilson 2017; Vaccarezza 2019).  Yet his primary concern was not with justifying and explaining our intuitions about the moral status of a given act; rather, he was concerned with what is involved in living a good human life.  To expect Aristotle's account of virtue to justify and explain our convictions about the moral status of an act not only imposes an aspect of our modern conception of ethics on his thinking, thereby distorting his view; more broadly, this expectation also carries the very real risk of facilitating unethical rather than ethical behavior.

Supervised by Chelsea Rosenthal

Epistemic Oppression in Healthcare:  Lessons from Chronic Fatigue Syndrome: Aaron Mascarenhas 

Abstract: The project of epistemic justice in healthcare seeks to identify epistemic harms constituted by persistent and unwarranted infringements upon the patient's epistemic agency. Kidd & Carel’s (2014) framework that extends Miranda Fricker’s concept of epistemic injustice to identify epistemic harms endured by patients has become increasingly popular. This paper will focus on Blease et al.’s (2017) use of this framework to identify epistemic harms endured by patients with chronic fatigue syndrome during the 2000s. During this time, the available evidence on CFS was impoverished. Thus, anyone who had to rely on this evidence to understand the illness faced epistemic difficulties. I will argue that this exculpates physicians of culpable epistemic and moral wrongdoings implied by Fricker’s concept of epistemic injustice. Thus, it will turn out that the epistemic harms described by Blease et al. (2017) fall outside the purview of Fricker’s epistemic injustice. However, I will show that where Fricker’s concept falls short, we can rely on Kristie Dotson’s conceptualization of epistemic oppression to characterise epistemic harms experienced by CFS patients during the 2000s. The broader point is that even if Fricker’s concept of epistemic injustice falls short in some cases, there might still be room for epistemological critique that allows us to identify the epistemic harms incurred by patients. Thus, the project of epistemic justice in healthcare can stand to broaden its theoretical horizons beyond its current myopic focus on Fricker’s epistemic injustice. 

Supervised by Endre Begby

On the Counterfactual Implicature of Subjunctive Conditionals: Milos Mihajlovic 

Abstract: In this paper I will address the following problem: why does the counterfactual implicature of subjunctive conditionals not arise in some cases? Often it seems natural to assume that the antecedent of an asserted subjunctive conditional is false (e.g. “If Stalin had not signed the pact, Hitler would not have invaded Poland”). But in some cases an assertion of a subjunctive conditional does not seem to signal that its antecedent is false (e.g. “If Jones had taken arsenic, then he would have exactly those symptoms that he has now” (Anderson, 1951)). There are different explanations of these Anderson-style cases. I will claim that the two general strategies taken by von Prince (2019) and Khoo (2022) cannot fully explain why there is no counterfactual implicature in such cases, and that their theories thus need to be amended. In addition, I will suggest that an adequate analysis of such cases would need to focus on whether the antecedent (or its negation) is already presupposed to be true in the context of a conversation at the moment before the assertion of the subjunctive conditional is made. I will also question to what extent it is natural, in general, to expect the counterfactual implicature of subjunctives in a conversation.

Supervised by Nic Fillion

A Quizzical Case of a Quantity Without a Quality: Aaron Richardson 

Abstract: There are three properties of light (intensity, wavelength, and polarization), yet humans are only sensitive to the first two. In the human case, differences in intensity are perceived as differences in brightness and differences in wavelength are perceived as differences in colour (more accurately, hue). Differences in polarization, however, are not perceived at all. In short, it is a quizzical case of a quantity without a quality. The goal of my paper is to demonstrate that there is no such quality. I do not mean to say that there is no quality whatsoever that accompanies polarized light, but that there is no quality which necessarily accompanies polarized light. Whatever it's like to see polarized light is entirely dependent on whatever perceptual task the animal uses it for. Bringing that back to the case of colour and brightness, while it is true that humans perceive differences in wavelength and intensity as differences in colour and brightness, that is only contingently true. We can generate two conclusions from this. Firstly, whatever colour and brightness are, they are not fundamental phenomenological entities. They themselves are the result of whatever high-level functional process wavelength and intensity play a role in. Secondly, there is a very real sense in which "what it's like to see polarized light" could be colour. Importantly, whether this is or is not the case is not decided prior to scientific investigation, but can only be revealed through a scientific and functional analysis of whether polarized light could be used to build a representation of the world as coloured.

Supervised by Kathleen Akins

Conspiracy Thinking As Hyperactive Agency Detection: Evan Eschelmuller 

Abstract: Conspiracy theories provide a unique example in which our normal information processing and knowledge acquisition systems can be overridden by a hyper-focus on agent-based explanations for world events. People prone to conspiracism are often viewed as irrational, gullible, or just plain stupid, both by laypersons and by some (but not all) in the philosophical community. In truth, the issue is much more complex: there are both psychological (e.g., individual-level predispositions) and social-structural factors contributing to conspiracy thinking. Hyperactive agency detection, while discussed somewhat in the empirical literature, is underexplored in philosophy as a causal explanation for conspiracy belief. Incorporating distinctive psychological traits, such as hyperactive agency detection, to explain conspiracy thinking can help us understand this phenomenon in a deeper way.

Supervised by Endre Begby


Believing While Inquiring: Yan Chen

Abstract: In a recent series of papers, Jane Friedman (2013a, 2017, 2019a, 2019b) proposes a norm of inquiry called “Don’t believe and inquire” (DBI), according to which one is rationally required to suspend judgment on p while inquiring into whether p, because having an outright belief in p is inconsistent with having any inquiring attitudes towards p. In this paper, I argue against the DBI norm. In particular, I present counterexamples about mathematical and scientific inquiries where researchers continue to believe that p while putting p under further inquiry. I argue that contrary to DBI, sometimes inquirers are rationally required to believe that p for the sake of their inquiry into whether p. Further reflection on what goes awry in Friedman’s DBI norm in particular challenges her larger project of differentiating between outright belief and degrees of belief with reference to DBI.

Supervised by Endre Begby

Grounding and the Explanatory Role of Laws: Ian Kahn

Abstract: The governing conception of laws of nature holds that laws are efficacious in guiding or producing events that happen in the world. While the metaphor of governing is intuitively appealing, it is silent on what kind of metaphysical relation obtains between those laws and events. Nina Emery (2018) posits a plausible candidate for the relation in question: metaphysical grounding. She argues that laws ground certain sequences of events that constitute their instances. I argue against this view by raising the general concern that a law’s role in explaining events—explaining why those events obtain—appears to get “crowded out” by causal explanation on the one hand, and grounding explanation not involving laws on the other. However, I suggest that once we more accurately locate the explanatory role of laws in scientific explanation, two alternatives emerge: laws ground causal relations between events that constitute their instances, or laws ground conditional statements relating those events. I conclude, though, that both purported grounding claims are consistent with a non-governing, Humean conception of laws, and so they cannot support a governing conception of laws. Thus, we need to look elsewhere to capture the unique metaphysics of governing laws.

Supervised by Holly Andersen

Believing Kant's Supreme Principle of Reason: Daniel Polillo 

Abstract: Throughout the first Critique, Kant offers us no shortage of principles to follow in our scientific reasoning. The highest of all principles of reason is the Supreme Principle of Reason, the claim that "[W]hen the conditioned is given, then the whole series of conditions subordinated one to the other, which is itself unconditioned, is also given (i.e., contained in the object and its connection)" (A307–8/B364). While Kant scholarship has given this principle more attention as of late, one question that I maintain has been inadequately addressed is what propositional attitude is the appropriate one to take towards this kind of guiding principle. While this may seem like an odd question to ask within Kant’s system, Kant does provide a taxonomy of propositional attitudes in a short, mostly overlooked chapter very late in the first Critique, called "On Having an Opinion, Knowing, and Believing". To answer this question, I partially adopt Willaschek’s reading of the Supreme Principle from his 2018 Kant on the Sources of Metaphysics: The Dialectic of Pure Reason. While Willaschek argues that we ought to read Kant as adopting a strictly neutral attitude towards the Supreme Principle, I argue that the most appropriate reading of Kant on this question is that the proper propositional attitude for us to have towards the Supreme Principle is one Kant calls "doctrinal belief". 

Supervised by Dai Heide

The Use of Algorithmic Tools in the Criminal Justice System: Hesam Mohamadi 

Abstract: Algorithmic tools built on statistical models are commonly used as a risk-assessment measure for sentencing, bail, and – more importantly for the sake of this paper – parole decisions. While many argue that the use of algorithmic assessment tools (hereinafter, “AATs”) in parole decisions may be problematic, the obligations of the state, given that it will use these tools anyway, remain underexplored. In this paper, I argue that the state has a duty to use AATs to identify the individuals who are less likely to be granted parole, in order to offer them extra services (like job-training plans, anger-management programs, etc.). At the same time, the funding for these extra services should come from resources that would otherwise have been given to non-incarcerated individuals. My argument proceeds in three steps. In the first step, I draw attention to the state’s duty to accelerate the release of incarcerated individuals. In light of this duty, the individuals who seem less likely to be granted parole must receive extra resources that enable them to fulfill the conditions of release. In the next step, I explain that AATs can be used to single out the individuals who are less likely to be granted parole – many of the individuals that the tool identifies as “high-risk” before the eligibility period are the ones who will be identified as “high-risk” by the same tool at the time of the parole decision. In the third step, I argue that the funding for extra services offered to the “high-risk” should be deducted from resources that would otherwise have been spent on non-incarcerated citizens, because non-incarcerated citizens – in comparison with incarcerated individuals – have a more substantial role in determining the situation in which one group loses some resources, namely, by legitimizing the use of AATs in parole decisions. 
Essentially, this paper argues that the burden of using AATs in the criminal justice system ought to be shifted from the so-called “high-risk” individuals to non-incarcerated individuals, and my proposal can be seen as a practical method for shifting some of this burden.

Supervised by Chelsea Rosenthal

Romantic Discrimination: A Communicative Analysis: Hung Nguyen-Le

Abstract: Excluding someone as a romantic partner on the basis of their race seems like wrongful discrimination. But what makes it so? In this paper, I examine two prominent accounts that seek to explain the wrongful and discriminatory nature of this phenomenon and demonstrate where they fall short. Specifically, I aim to show that both of these approaches are misguided in applying a distributive analysis, and that they are not accurate enough to prevent false positives. By the end of the paper, I will propose a different account that makes use of a communicative analysis in order to track the wrongness of race-based romantic exclusion. On this view, the phenomenon’s wrong-making feature will be shown to reside in the demeaning message that it sends to the victim.

Supervised by Chelsea Rosenthal


Don't Let Me Be Misunderstood: Sarah Paquette

Abstract: The standard interpretation of Hume on beliefs acquired through testimony reduces testimony to perceptual evidence. The result is an account of belief formation and decision making about moral matters that fails to register the passions and character of those involved, or the influence of social relations and sympathy. I argue that this problem is absent from Hume’s original work, as he draws a distinction between moral matters and matters of fact and presents a nuanced descriptive account.

Supervisor: Lisa Shapiro

On Rosen's Bridge-Law Non-Naturalism: Moral Laws and Weak Supervenience: Reza Abdolrahmani


This paper examines Rosen’s bridge-law non-naturalism, the view that an adequate explanation of why a given moral property is instantiated cites (i) natural properties and (ii) a moral law bridging the natural properties and the sui generis moral property. I will argue that Rosen’s account of moral laws faces three problems. (1) His account could allow an implausible explanans to be cited in explaining why an entity has a given moral property (the problem of explanation-conduciveness). (2) Moral laws lack the required inner necessity between moral and non-moral properties (the problem of lack of inner necessity). (3) The bruteness of moral laws, per Rosen, makes them unknowable, while they presumably must be epistemically accessible in order to be action-guiding; I will show that the way in which Rosen addresses this issue is defective (the problem of epistemic access). As will become clear, these three problems together put Rosen’s account of non-naturalism under substantial pressure.

Supervised by Dr. Tom Donaldson

Provoking Change: Debunking the Provocation Defence for Murder: Lauren Perry


Provocation is a partial defence, which if successful reduces the charge of murder to manslaughter. The law treats a killing as provoked if the defendant killed because they lost control in anger caused by the victim’s wrongful act or insult. The defence is controversial: many feminists argue it should be revised or abolished, because it perpetuates undesirable masculine norms of violence and anger expression towards women. Others, while sympathetic to feminist arguments, defend provocation on the ground that it acts as a “concession to human frailty.” This paper offers two debunking arguments against the “concession to human frailty.” The first is historical: provocation emerged not as a concession to human frailty but as mitigation for aristocratic men who acted to preserve their honour. Second, I argue that the contemporary defence, despite a reconceptualization in terms of “loss of self-control,” functions analogously to the historical defence. Provocation should be abolished: it is not a concession to human frailty, but mitigation for those who kill to preserve their social and moral status in an unjust system.

Supervised by Dr. Evan Tiffany

Knowing When a Gnomon Is Not Working: A Complementary Scientific Approach to Eratosthenes' Calculation of the Earth's Circumference: Cem Erkli


According to the model-based epistemology of measurement, measurement is justified by calibration models that allow a user to go from an instrument indication to the measured value of a quantity (Tal, 2017). In this paper, I explore the ways in which the measuring instruments themselves, as opposed to calibration models, limit certain inferences about the value of a quantity. A specific case of this limitation is seen in measuring instruments that are physical models, since the similarity of the model to its target system can determine whether an inference is justified or not. To argue this point, I apply the contemporary literature on physical models (e.g. Weisberg 2012, Sterrett 2001) and measurement to the historical case of Eratosthenes' (276–194 BC) calculation of the earth's circumference. In the end, I suggest a broad framework, based on Nancy Cartwright's nomological machines, for analyzing the epistemic role instruments play in science.

Supervised by Dr. Holly Andersen

Credibility as a Distributive Good: Varsha Pai


The credibility that one is given plays a central role in determining the degree to which one is allowed to participate in the production and distribution of social goods. Miranda Fricker (2007) sheds light on the injustice of credibility deficit in her conception of testimonial injustice. This conception, however, views credibility excess as irrelevant to the injustice in question. In this paper, I investigate features of credibility as a good and argue that it falls within the purview of distributive justice. In doing so, I demonstrate that excesses of credibility do create corresponding deficits, making them relevant to the question of testimonial injustice. Accordingly, I offer an understanding of testimonial injustice that accommodates the distributive nature of credibility while being consistent with Fricker's broader aims.

Supervised by Dr. Endre Begby

Can't Put It Into Words? Put It Here: A Framework for Modelling Inexpressible Information: Somayeh Tohidi


In this paper, I introduce a framework called Syntactic Probabilistic Aumann Structures (SPAS), an adapted version of a framework widely used by economists, to provide a novel perspective on some problems in social and formal epistemology. I argue that this framework is superior to a mainstream framework like standard Bayesianism for modelling situations in which inexpressible information is involved. In single-agent scenarios, learning experiences that cannot be captured by a set of sentences may impede the process of updating. I show how thinking about these situations in terms of SPAS enables an agent to update their credence function using inexpressible information; this possibility emerges from the resources SPAS has for modelling pieces of information as non-linguistic entities. In multi-agent scenarios, on the other hand, information that is inexpressible in the agents’ common language can cause them to disagree on the credence they assign to an event. The question is how each agent should update their credence function upon learning the other agent’s credence, given that direct conditionalization on the credences of others is computationally very difficult. In SPAS, agents can indirectly exchange their private inexpressible information simply by announcing to each other their credence in a certain sentence. They can then update their credence functions by conditionalizing on the information they have inferred from an announced credence, instead of on the announced credence itself. This possibility stems from the resources the framework has for distinguishing the pieces of information that can be learned (with certainty) from other pieces of information.
I will show the relevance of this result to the problem of 'peer disagreement' by showing that agents with asymmetric information can be considered peers according to a recent, less controversial notion of peerhood.

Supervised by Dr. Nic Fillion
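To make the multi-agent mechanism in the abstract above concrete, here is a toy partition-based (Aumann-style) model in Python. The state space, prior, and partitions are illustrative choices of the editor, not taken from the paper; the point is only that agent B conditionalizes on the set of states at which agent A would have announced a given credence, rather than on the announced credence itself.

```python
from fractions import Fraction

# Toy state space with a shared uniform prior.
states = [1, 2, 3, 4]
prior = {s: Fraction(1, 4) for s in states}
event = {1}  # the event whose probability agent A announces

# Private information partitions: each agent learns only which cell
# of their own partition the true state falls in.
partition_a = [{1, 2}, {3, 4}]
partition_b = [{1, 3}, {2, 4}]

def cell(partition, true_state):
    return next(c for c in partition if true_state in c)

def credence(info, event):
    total = sum(prior[s] for s in info)
    return sum(prior[s] for s in info & event) / total

def states_consistent(partition, announced, event):
    # States at which the partition's cell would yield the announced credence.
    return {s for s in states if credence(cell(partition, s), event) == announced}

true_state = 1
announced_a = credence(cell(partition_a, true_state), event)  # A announces 1/2

# B conditionalizes on the information inferred from A's announcement,
# not on the announced credence itself.
info_b = cell(partition_b, true_state) & states_consistent(partition_a, announced_a, event)
print(credence(info_b, event))  # prints 1
```

Here A's announcement of 1/2 reveals that the true state lies in A's cell {1, 2}; intersecting this with B's private cell {1, 3} leaves B certain that state 1 obtains, even though A's cell was never stated in the common language.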

On Regan’s Solution to the Moral Problem of Animal Predation: Lutfi Shoufi 


In my paper, I aim to critique Tom Regan’s response to the moral issue of animal predation. Contrary to his ambition of showing that we do not have a duty to intervene in animal predation, Regan’s response actually leaves us with such an obligation. In fact, it leads to what many would consider a radically counter-intuitive obligation: we ought to intervene on behalf of the predator in many predator-prey conflicts. As we will see, Regan's response to the moral issue of predation is importantly built on the putative competence of wild animals. If my argument succeeds, the burden is on Regan either to come up with a plausible conception of competence (which I suggest he has not offered us) such that all wild animals would be competent in their conflicts, or to develop a new, non-competence-based strategy for resolving the moral issue of predation. At best, Regan's solution is incomplete. At worst, it is self-undermining.

Supervised by Dr. Chelsea Rosenthal

The Only Possible Interpretation of the Only Possible Argument: Schuyler Pringle


In the paper, I outline and defend a widely underappreciated reading of Kant’s The Only Possible Argument, according to which the existence of the simple, unique, and absolutely necessary being, which Kant thought to materially ground all possibility, follows naturally from Kant’s definition of absolute necessity: that something’s opposite, its nonexistence, should cancel all possibility whatsoever. Since, according to Kant, the internal possibilities of all things presuppose some existence or other, all possibility may be said to presuppose some existence – the existence of one or many things – that itself actually instantiates the complete set of the most fundamental predicates. If all predicates of all possible things are grounded in some existence that actually instantiates this complete set and thereby grounds all possibility, then the cancellation of this existence would cancel all possibility whatsoever. According to Kant’s definition of absolute necessity, such an existence is absolutely necessary. And, as I explain later in the paper, if something is absolutely necessary, then it must also be both unique and simple. Thus it simply follows from Kant’s definition of absolute necessity that there exists a simple, unique, and absolutely necessary being which grounds all possibility. I then explain what motivates Kant to adopt this particular definition of absolute necessity, and, in doing so, I demonstrate that a widespread assumption in the literature is in fact false: Kant did not accept an analogue of the axiom of S5; he did not think that whatever is possible is necessarily possible.

Supervised by Dr. Dai Heide

An Ill-Natured Neologicism:  Bob Hale on the Modal Epistemology of Mathematics: Paul Xu


Neologicists believe that all truths of pure mathematics are “analytic”, in the sense that they are logically entailed by definitions. Since definitions in mathematics are typically thought of as necessary truths, neologicists have an interest in discussing modal mathematical truths, truths like “2 is necessarily a number”. Surprisingly, however, neologicists have said very little in this regard. In fact, the only neologicist who has discussed this issue at length is Bob Hale, who attempts to fill this lacuna by appealing to the essences or natures of mathematical objects. In this paper I will discuss how neologicists may make sense of modal mathematical knowledge by examining Hale’s essentialist proposal. I aim to show that Hale’s essentialism carries unacceptable epistemological consequences for neologicism, and I will recommend a promising alternative in trivialist platonism.

Supervised by Dr. Tom Donaldson

On the Syllogistic Character of the Aristotelian Enthymeme: Ali Taghavinasab


For more than two thousand years, the Aristotelian enthymeme was partly defined as a kind of syllogism (sullogismos) in which one of the premises is usually suppressed. In the last century, however, this doctrine has been rejected as a complete misunderstanding of Aristotle with little practical use. In this paper, I defend the syllogistic character of the enthymeme and the idea that the Aristotelian enthymeme should be understood within Aristotle's logical theory. In the first part of the paper, I argue that only this doctrine sufficiently distinguishes Aristotle’s conception of the enthymeme from that of the authors of rhetorical manuals before him, as well as from that of non-specialist authors. To show this, I highlight two salient features of Aristotle’s concept of the enthymeme and argue that only the syllogistic interpretation can account for them. In the second part, I argue that the syllogistic interpretation provides a straightforward explanation of the similarities and dissimilarities between rhetorical arguments, on the one hand, and demonstrative (apodeixis) and dialectical arguments, on the other. I am not claiming that the syllogistic doctrine provides a comprehensive account of the Aristotelian enthymeme – many of its rhetorical characteristics, such as the role of emotions, lie outside the scope of the paper. My claim is that every satisfactory account of the Aristotelian enthymeme should take its syllogistic structure into account. Aristotle’s redefinition of the enthymeme in terms of his logical doctrine is part of his project to highlight the cognitive and rational sides of the art of rhetoric.

Supervisors: Dr. Evan Tiffany, Dr. David Mirhady (Humanities, SFU)
Second reader: Dr. Sarah Hogarth Rossiter

Paradigms, Poles, and Higher-Order Vagueness by David Rattray


Central to any conception of vagueness in natural language is a conception of borderlineness. Usually, our reluctance to answer the question ‘Does a count as F?’ either positively or negatively is characteristic of a’s being a borderline case of F. The traditionalist thinks that the existence of borderline cases is definitional of vagueness: a predicate is vague just in case it admits of borderline cases. This leads the traditionalist to think of vague predicates as resisting sharp boundaries. Because vague predicates permit borderline cases, they do not effect a sharp division between their positive and negative cases – there is a third class of borderline cases buffering the two. A threefold division, while helpful at first blush, runs into trouble when we furnish the theory with the means to express borderlineness. This is usually accomplished in one of two ways: we can add a formal surrogate of the notion of determinacy (clarity, definiteness) and identify borderline F cases as those that are not determinately F and not determinately not-F; or we can introduce a formal equivalent of borderlineness directly. The problem is that just as a vague predicate admits of borderline cases – and thus resists sharp boundaries – we expect borderlineness to admit of borderline cases as well. That is, there shouldn’t be a sharp division between the clear F cases and the borderline F cases, nor between the borderline F cases and the clear not-F cases. Should we effect a five-fold division by introducing “second-order” borderline cases, the above reasoning iterates: we expect “third-order” borderline cases to blur any new boundary, which in turn will require borderline cases of their own. Rinse and repeat. The traditionalist calls the requirement that a vague predicate’s boundaries be blurred by borderline cases the phenomenon of higher-order vagueness, and expects it to be accommodated by any successful theory of vagueness. Following others, I call this ‘hierarchical higher-order vagueness’, as it forces us up through a vertiginous hierarchy of multiplying borderline cases.

Supervised by Tom Donaldson


Justifying the ‘Yuck Factor’: Disgust Revisited by Cody Brooks

Just how moral is moral disgust? In the past decade we appear to have crept steadily towards consensus on this question. According to a growing number of philosophers, disgust is irrelevant to our moral thinking, and when we rely on it we run the risk of being led severely morally astray. In this paper, I subject this growing consensus to scrutiny by challenging an argument against moral disgust made by Daniel Kelly. In perhaps the most thorough take-down of the emotion currently available, Kelly argues that moral disgust is unreliable as an indicator of moral truth and is irrelevant to moral justification. In what follows, I suggest that even if disgust is an unreliable path to moral truth, some reliance on the emotion may still be justified – specifically if one is of low socio-economic standing.

Supervised by Evan Tiffany

Agency in the Natural World by Simon Pollon (PhD Thesis)

Human agency, like our other traits, is likely continuous with that of other organisms that have evolved on this planet. However, Modern Action Theory has focused almost exclusively on the agency of human beings, so it is not obvious how agency should be understood as a more deeply and broadly distributed, or more basic, biological type. The central aim of this thesis is to fill this gap by developing and defending an account of what I’ll call “Biologically Basic Agency.” In the first chapter, I establish a preliminary set of adequacy conditions drawn from broad consensus in Modern Action Theory and from the needs of biological categorization. These adequacy conditions are then amended and supplemented over the subsequent three chapters via critical discussion of two accounts of Biologically Basic Agency that attempt to meet them. In Chapter 2, I show that Tyler Burge’s (2010) account of Primitive Agency cannot be empirically refuted and is therefore trivial. In the third chapter, I argue that Kim Sterelny’s (2003) account of the Detection System cannot serve as the evolutionary precursor to agency, because the kind of general evolutionary story Sterelny desires is empirically implausible. In the fourth chapter, I continue my discussion of Sterelny’s Detection System, because his basic idea that the simplest adaptive behavioral systems are “feature (or signal) specific” is deeply intuitive and popular amongst philosophers and cognitive scientists. Focusing on the behavior of simple model organisms, I argue that, contra Sterelny and this intuition, these organisms move themselves through their environments toward a best overall place to be, relative to a number of (often competing) environmental features relevant to their biological needs, typically utilizing sensory inputs corresponding to these various features of the environment.
I call such behavior ‘Utopic Behavior.’ Finally, in Chapter 5, I defend Utopic Behavior as an account of Biologically Basic Agency, as it both meets the various adequacy conditions I have established and demonstrates a clear continuity between human agency and that of other organisms that have also evolved on this planet.

Supervised by Evan Tiffany

The Worst of All Possible Words: Slurs, Reappropriation, and Bracketing by Kelsey Vicars

Slurs can be used to offend, to insult, to derogate, and to subordinate members of the groups they target. Certain slurs can also be used to empower those groups, and to signify a shared social identity. When a slur is reclaimed or reappropriated, the power that the term has to offend and insult is taken back by the members of the group the slur was originally intended to disparage. In this paper I will suggest that the complexity of this phenomenon poses a problem for existing philosophical accounts of slurring terms: common to these is that they incorporate the expression of a derogating or contemptuous perspective into all uses. After a brief discussion of the direction that standard accounts of slurs take, I will explain how reclaimed slurs present a challenge. I argue that the overwhelming focus on pejorative slurs has come at the unfortunate expense of exploring these terms in their reappropriated forms, and thus has largely ignored the important ethical and legal roles these terms play.

I then turn to a discussion of how reappropriated slurs have been treated in the literature. In general, these terms are either regarded as instances of non-literal language, explained away by appeal to speaker identity, dismissed as insignificant or uncommon, or ignored completely. I argue that all of these strategies are misguided, and in particular that ignoring or ‘bracketing’ these terms is especially problematic. I then present an explanation of the ethical and legal significance of reappropriated terms. I conclude by suggesting what directions an account of slurring that accommodates both pejorative and reappropriated uses might take.

Supervised by Endre Begby

Developing a Fuller Picture of Moral Encroachment by Carolyn von Klemperer

Many hold that our beliefs can be epistemically faultless despite being morally problematic. Proponents of moral encroachment push back against this, arguing that moral considerations can impact the epistemic standing of our beliefs. In particular, the amount of evidence required for a belief to be epistemically justified can be influenced by whether the belief imposes a risk of harm upon the belief-target. In this paper, I point out two related gaps in the current picture of moral encroachment. I argue that the current picture should be expanded from its exclusive focus on the belief-target to recognize risk of harm to third parties and to the believer herself. Further, I argue that, as this first expansion helps to reveal, the theoretical framework of moral encroachment should allow the evidential threshold for justification to move not only up but also down. Finally, I consider whether moral benefits, and not just moral harms, can play a role in moral encroachment. The purpose of these arguments is to paint a fuller picture of moral encroachment, one that better captures the complexity of our moral landscapes and, in doing so, helps moral encroachment be more consistent with its aim.

Supervised by Endre Begby

The Role of Non-Causally Related Variables in Causal Models by Weixin Cai

Proponents of the structural equations approach construct causal models to discover causal relationships and represent the causal structure of a system. Many authors who favor this way of studying causation require the variables in a causal model to be “fully distinct,” so that any dependence relationship found in the model must be genuinely causal. In this paper, I argue that this requirement cannot be universally upheld, because non-causally related variables are needed as a general solution for distinguishing causal structures of late preemption from those of overdetermination in a model. By incorporating non-causally related variables, we can represent the distinctive causal feature of late preemption differently from the way we represent the distinctive causal feature of overdetermination. I also explore some alternative solutions that avoid non-causally related variables and argue that none of them can serve as a general solution to this problem. I then explain why allowing non-causally related variables in a causal model is consistent with a better way of understanding the claim that variables in a model must be “fully distinct”: while the requirement of distinctness should be strictly followed in the context of causal discovery, we can freely use non-causally related variables in the context of causal representation to capture the distinctive causal feature of a causal structure.

Supervised by Holly Andersen
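For readers unfamiliar with the structural-equations machinery the abstract above relies on, here is a minimal Python sketch using the stock rock-throwing example from the causation literature (the variable names are illustrative, not taken from the paper). The extra "hit" variables, which are not causes of one another, are what let the late-preemption model differ from the symmetric overdetermination model.

```python
# Each endogenous variable is a function of other variables;
# exogenous variables are set directly.

def evaluate(equations, exogenous):
    """Solve an acyclic structural-equations model by repeated passes."""
    values = dict(exogenous)
    while len(values) < len(equations) + len(exogenous):
        for var, f in equations.items():
            if var not in values:
                try:
                    values[var] = f(values)
                except KeyError:
                    pass  # some parent not computed yet; retry next pass
    return values

# Overdetermination: either throw alone suffices, and the model is symmetric.
overdet = {
    "BottleShatters": lambda v: v["SuzyThrows"] or v["BillyThrows"],
}

# Late preemption: Suzy's rock hits first, so Billy's never does. The extra
# hit variables break the symmetry of the overdetermination model.
preempt = {
    "SuzyHits": lambda v: v["SuzyThrows"],
    "BillyHits": lambda v: v["BillyThrows"] and not v["SuzyHits"],
    "BottleShatters": lambda v: v["SuzyHits"] or v["BillyHits"],
}

actual = {"SuzyThrows": True, "BillyThrows": True}
print(evaluate(overdet, actual)["BottleShatters"])  # True
print(evaluate(preempt, actual)["BillyHits"])       # False: Billy's throw is preempted
```

With the same exogenous settings, only the second model encodes the asymmetry between the two throws, which is the kind of representational difference the paper discusses.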

Wright versus Pearson: Causal Inference in the Early Twentieth Century by Zili Dong

Causation has not always been a welcome idea in the history of science. In the early twentieth century, statistics and biometry were under the dominance of Karl Pearson, who advocated correlation while dismissing causation. So when a method of causal inference called “path analysis” was invented by Sewall Wright during the 1920s, the method was largely ignored. From an integrated history and philosophy of science perspective, this paper argues that, contrary to what many might have thought, the marginalization of causal inference in the early twentieth century was to some extent inevitable.
I first depict a Kuhnian picture of twentieth-century statistics that takes Wright’s invention of path analysis as a premature attempt at revolution against the paradigm of correlation analysis. Within this picture, I then examine the formal, conceptual, and methodological aspects of causal inference in the early twentieth century. On the formal side, I demonstrate that causal inference methods such as path analysis require unconventional formalisms like causal diagrams, whose power could hardly be appreciated in the early twentieth century. On the conceptual side, I argue that Pearson’s positivist conception of causation was well-motivated given the background of science and philosophy at the time. On the methodological side, I argue that it was reasonable to eschew path analysis in empirical research, since applying it relies on assumptions that were often hard to meet.

Supervised by Holly Andersen
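As a toy illustration of what path analysis adds to bare correlation analysis (the numbers below are an editorial example, not Wright's): for standardized variables in the causal chain X → M → Y, Wright's tracing rule says the X–Y correlation equals the product of the path coefficients along the connecting path.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
p_xm, p_my = 0.6, 0.5  # path coefficients for the chain X -> M -> Y

# Generate standardized variables so each has unit variance.
x = rng.standard_normal(n)
m = p_xm * x + np.sqrt(1 - p_xm**2) * rng.standard_normal(n)
y = p_my * m + np.sqrt(1 - p_my**2) * rng.standard_normal(n)

# Tracing rule: corr(X, Y) should be close to p_xm * p_my = 0.3.
r_xy = np.corrcoef(x, y)[0, 1]
print(round(r_xy, 2))
```

The correlation alone is symmetric and structure-blind; the causal diagram is what licenses reading 0.3 as the product of two directed effects, which is why the method required formal resources Pearson's framework lacked.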

Process, not just the product: the case of network motifs analysis by Shimin Zhao

Real-world complex systems are increasingly represented and analyzed as complex networks. In this paper, I argue that the distinctiveness of this network approach to complex systems is better captured when we focus on the process of network motifs analysis, not only on its product, the resulting network explanations. In particular, by examining the process by which a network motif is identified and analyzed, I argue that network motifs analysis is distinctive because identifying a motif relies both on analysis of the target network and on comparison of the target network to random networks. A network motif is therefore not an internal property but a doubly relational property of patterns, defined with respect to the target network and to the random networks. This doubly relational nature of network motifs cannot be captured by a narrow focus on explanations.

Supervised by Holly Andersen
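The doubly relational point in the abstract above can be made concrete with a small sketch: a pattern's count in the target network means nothing by itself and must be compared to its count across an ensemble of randomized networks. (This toy uses simple edge resampling; real motif analyses typically use degree-preserving randomizations.)

```python
import itertools
import random

def triangles(edges, nodes):
    """Count instances of the 3-clique (triangle) pattern in an undirected graph."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(1 for a, b, c in itertools.combinations(nodes, 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

nodes = range(6)
target = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5)]  # two triangles

def random_graph(n_nodes, n_edges):
    # Null model: same node and edge counts, edges placed at random.
    pairs = list(itertools.combinations(range(n_nodes), 2))
    return random.sample(pairs, n_edges)

random.seed(1)
null_counts = [triangles(random_graph(6, len(target)), nodes) for _ in range(1000)]
mean_null = sum(null_counts) / len(null_counts)

# The pattern qualifies as a motif only relative to this comparison.
print(triangles(target, nodes), round(mean_null, 2))
```

Nothing about the triangle pattern itself makes it a motif; the same count would or would not qualify depending on both the target network analyzed and the null ensemble chosen, which is the double relationality the paper isolates.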

God Can Do Otherwise: A Defense of Act Contingency in Leibniz’s Theodicy by Dylan Flint

In this paper I articulate and defend an account of contingency found in Leibniz’s Theodicy, which I call act contingency. The basic idea is simple: while God is, on this view, morally bound to create the best, he is nonetheless free to do otherwise. This account thus introduces contingency into Leibniz’s thought through the fact that, in a certain sense, God could have created a sub-optimal world. While act contingency permeates Leibniz’s mature thought, it has not been well received by current scholarship. The biggest problem with the account is straightforward: if God has a perfect nature, then it seems impossible for him to do anything but the best, and this would seem to render the creation of sub-optimal options impossible. Instead of trying to find the root of contingency for Leibniz in the contingency of God’s actions, it has become common practice in recent scholarship to locate the needed contingency in the objects of God’s choice, either by arguing for the per se possibility of other worlds or by an account of infinite analysis on which it is contingent which world is the best. This paper challenges this established practice. I argue that Leibniz in fact had good reason to adopt act contingency and that it is itself a philosophically satisfying account of contingency and divine agency.

Supervised by Dai Heide

"Externalism in Philosophy of Perception and Argument(s) from Dreaming" by Dogan Erisen

Abstract: A recurrent pattern of debate between proponents of internalism and externalism over mental phenomena is as follows: externalists pick a target mental phenomenon, say visual perception, and argue that it has the characteristics it has because of a property that is not possessed internally. Internalists, in return, substitute an analogous mental phenomenon, one that putatively suits their position, and argue that it shows every characteristic the original target phenomenon shows, so that the allegedly crucial external property plays no ineliminable role. Within these debates one analogue phenomenon appears frequently: dreaming. In what follows, I discuss the ways in which externalism comes under dispute through dream phenomena. I then investigate the scientific literature to evaluate whether the way dreaming is conceived by internalists is substantiated by the available body of evidence. I conclude that the current state of sleep science does not lend support to internalists’ conception of dreaming.

Keywords: dreaming; externalism; embodied cognition; perceptual experience; constitutive explanation

Motivating a Variant of Anscombe's Conventionalism:  A Proposal and Defense by Cody Britson

In this essay, I motivate a variant of Anscombe’s metaethical conventionalism as an alternative to a prominent Neo-Kantian view of practical normativity. I first argue that Darwall’s second-personal view of normativity, an exemplar Neo-Kantian view, has substantive metaphysical issues arising from internal tensions. While this argument does not provide all-things-considered reasons to abandon this view and ones like it, it does suggest we should look for a more metaphysically sound alternative. Toward this end, I draw on Anscombe’s work on promising and authority to argue that voluntary participation in social conventions provides the best explanation of the source of practical normativity. I finish by defending this view against the charge of relativity.

"Against Primitivism in Metaphysical Explanation" by Navid Tarighati 

Abstract: In this paper I will argue that metaphysical explanation cannot be cashed out in terms of the notions of ground and essence alone. First, I will explain each of the two notions of ground and essence. Then, I will argue that the notion of ground can be definitionally reduced to a special type of essence. After addressing the objections that this reduction might face, I will modify the reduction of ground to essence in order to meet them. I will argue that this new notion of essence, called alethic essence, is not sufficient to explain that in virtue of which some facts hold. As such, neither ground nor essence individually, nor ground and essence together, can serve as the basis for a primitivist approach to the class of metaphysical explanatory relations.

Cultural Diversity and the Demands of Non-Domination: An Internal Challenge to Philip Pettit's Neo-Roman Republicanism by Damien Chen

Abstract: Most modern liberal countries contain a diversity of cultures and ethical values, and a commitment to ethical pluralism has become part of common political consciousness. Traditional republicanism, however, is vested with thick sectarian ethical values. In his neo-Roman republicanism, Philip Pettit seeks to champion a republican understanding of freedom – non-domination – while divorcing it from the sectarian ethical values of traditional republicanism. This paper mobilizes the conceptual resources of non-domination to present an internal challenge to Pettit concerning his success in meeting the demands of non-domination and ethical pluralism – essentially, the demand that cultural minorities be able to enjoy non-domination while holding on to their ethical values.

I argue that cultural minorities are potentially subject both to domination by the state and to domination by the Western cultural majority. Section 1 analyzes the demands of ethical pluralism from the perspective of non-domination and sets the stage by introducing the social background of cultural minorities’ vulnerability. Section 2 argues that there is potentially state domination of some cultural minorities, because Pettit’s electoral-cum-contestatory system, which is supposed to secure legitimacy for the state, requires citizens to have a competitive individualist character, which does not reflect the character of many cultural minorities or their ethical values. Hence, in this system, some cultural minorities do not properly control the state. In Pettit’s theory, this lack of control means that state interferences dominate these minorities. Section 3 concerns potential majoritarian domination. This domination stems from republicans’ willingness to interfere in and end minorities’ questionable practices. Since state policies have a heavy influence on public opinion, a liberating policy that depicts a culture negatively could assist the majority’s domination of that cultural minority. I caution that a direct and aggressive approach risks assisting majoritarian domination of minorities by weakening their voices and stigmatizing them, especially the vulnerable sections within them. Instead, I argue for an approach that empowers minorities’ discursive control.

Supervised by Evan Tiffany

"Do You Hear What I Hear?  Individual Differences in Speech Phenomenology" by Dzintra Ullis

Abstract:  Suppose that I, a native speaker of Indonesian, hear the question, “Anda mau beli apa?” How does my experience of what was heard differ from your experience, as a monolingual English speaker? One difference is semantic, i.e., I hear the question as meaningful. Setting semantics aside, surely there are other phenomenal differences between our speech experiences. This paper begins with an examination of Casey O’Callaghan’s (2017) account of speech phenomenology. His account depends upon three levels of phenomenology (low-, mid-, high-), where phenomenal differences stem from the middle level of phenomenology, i.e., from having phoneme awareness of a language. However, given the unique and highly complicated signal-processing tasks the mammalian auditory system faces, and the (likely) unique physiological organization that has come about because of them, O’Callaghan’s view is left in a precarious situation. I focus on two ways the auditory system resolves these complex tasks—through multimodality and life-long plasticity. I argue that there is no single capacity that distinguishes the speech phenomenologies of speakers of different languages. Even speakers of the same language will likely have distinct phenomenologies, given their histories of language-related experience and training. As it turns out, language speakers are like musicians. Years of training and practice produce musical expertise, impacting both how a musician plays and what she hears. The same is true of us, in that years of training and practice produce linguistic expertise, impacting both how we speak and what we hear.

Supervised by Kathleen Akins

"Lockean Forensic Persons & The Having of a Body" by Anthony Nguyen

Abstract:  Some insist that Locke’s thesis of self-ownership is in tension with his theory of personal identity. On the one hand, political theorists read Locke’s self-ownership thesis as claiming that each individual has natural ownership rights over their body and labor. Self-ownership rights are said to be natural rights for all persons because they are intrinsic to personhood. On the other hand, many think that Locke’s theory of personal identity posits the person as an essentially disembodied psychological entity because having a body is not necessary for personhood. If, however, having a body is unnecessary for being a person, then why is it that all persons have natural ownership rights over their bodies? Several recent scholars respond to this puzzle by claiming that Locke’s views on personal identity are inconsistent with his theory of natural rights.

I argue against this charge of inconsistency. I argue that Lockean persons are not essentially disembodied psychological entities. Instead, the “persons” that are the subjects of natural rights are intelligent agents with the “forensic” capacities needed for accountability and moral agency. I argue that in order for a Lockean forensic person to be non-arbitrarily accountable for voluntary bodily acts, it is necessary that such a forensic person have a body, even when the theoretical conditions for being a forensic person do not require having a body. Because forensic persons do have bodies in normal circumstances, Locke’s account of personhood and personal identity is consistent with his theory of natural rights, particularly in terms of having natural ownership rights over one’s body.


Smart Shopping for Inductive Methods by Matthew Maxwell

Abstract:  Conciliatory theories of disagreement state that in the face of disagreement one must reduce one's confidence in the disputed proposition.  Adam Elga claims that this conciliatory principle falls to a self-undermining problem: when the disagreement is about whether or not one ought to be a conciliationist, we find that conciliationism recommends its own rejection.  In order to protect the conciliatory principle from self-undermining, Elga insists that we must be dogmatic about inductive methods and principles.  In this paper I contend that there is no need to do this.  The conciliatory principle does not necessarily fall to the self-undermining argument and, even if the principle could possibly fall to the self-undermining problem, dogmatism is unwarranted.  Because dogmatism is unwarranted, we will need a new way of defending conciliationism.  Finally, I argue that the conciliatory principle is best defended empirically.

Supervised by Endre Begby

The Role of Audience in Mathematical Proof Development by Zoe Ashton

Abstract:  The role of audiences in mathematical proof has largely been neglected, in part due to misconceptions like those in Perelman & Olbrechts-Tyteca (1969) which bar mathematical proofs from bearing reflections of audience consideration. In this paper, I argue that mathematical proof is typically argumentation and that a mathematician develops a proof with his universal audience in mind. In so doing, he creates a proof which reflects the standards of reasonableness embodied in his universal audience. Given this framework, we can better understand the introduction of proof methods based on the mathematician’s likely universal audience. I examine a case study from Alexander and Briggs’s work on knot invariants to show that we can fruitfully reconstruct mathematical methods in terms of audiences.

Supervised by Nic Fillion

Kant on the Unity of Spontaneity by Haley Brennan

Abstract: Kant describes absolute spontaneity as an act of transcendental freedom.  It is generally taken that this concept of spontaneity is meant to apply to acts of the autonomous will, not extending beyond practical philosophy.  On this view then, when Kant uses the concept 'spontaneity' to refer to the understanding in his theoretical philosophy it must be a different (thinner or qualified) kind.  I argue that this view is mistaken.  Instead, spontaneity is a unified concept for Kant, one that is always equivalent to transcendental freedom in both practical and theoretical contexts.  I defend this view against two possible objections: (1) that the understanding is causally determined and so cannot be genuinely free, and (2) that this violates Kant's claims about our epistemic limitations.  In doing so, I articulate the way that the understanding is spontaneous, and how we are genuinely free in our rational activity.

Supervised by Dai Heide


The Road to a Fair Standard of Negligence by Ulyana Omelchenko

Abstract: An agent commits a negligent act in the following circumstances: the agent should be aware of a substantial and unjustifiable risk that her action carries to another; she is not aware of the risk; she proceeds with the act and wrongs the other. When she is tried for negligence, the agent’s behaviour is compared to a standard that describes the appropriate conduct for the situation in question. Two competing intuitions influence the content of this standard: we do not want to over-punish the agent for committing a wrong she was not even aware of; however, we also do not want to under-punish her for wronging the victim in a situation where we feel that she should have taken the relevant risks into account.

In this paper I look at the different types of standards of conduct commonly proposed in the literature that try to balance these two intuitions; my final goal is to find the standard that fares best. I show that none of the standards succeeds in accommodating both intuitions fully, and I point out the two strongest accounts, each of which accommodates one of the intuitions fully and the other only partly. I conclude that there is conceptual space for a standard that would combine the benefits of these two best standards while solving their problems.

Supervised by Evan Tiffany

Not by Institutions Alone: Empowerment and Human Development by Wil Contreras

Abstract: In Why Nations Fail, Acemoglu and Robinson aim to explain wealth disparities between countries using their distinction between extractive and inclusive institutions. They contend that inclusive institutions promote prosperity, while extractive ones undermine it. They conclude that broad civic empowerment serves as the foundation for inclusive institutions, igniting the path towards prosperity. While a promising theory, this notion of empowerment is left underspecified: what is it exactly? How can it be leveraged to improve people's well-being? I believe better answers can be gleaned from Miranda Fricker's epistemic injustice framework, employing her conceptions of structural identity power and hermeneutical marginalization. Specifically, I build on the idea that power spans at least two dimensions, psychological and material; and that, while Acemoglu and Robinson's theory muddles these together, Fricker's framework helps delineate and understand the relationship between the two. Given this relationship, I propose that a marginalized group is psychologically disempowered when it has internalized detrimental beliefs that translate into detrimental behaviour patterns—patterns that keep the group materially disempowered. Thus, empowerment involves abandoning those beliefs, breaking those patterns, and the consequent disruption of the social order, towards greater equality. Ultimately, I conclude that epistemic change precedes social change: that political empowerment begins with psychological empowerment, demanding a more comprehensive approach to poverty relief.

Supervised by Endre Begby

Reflective Immunity: An Essay on Korsgaard's Conception of Action and Reasons by Anthony Moradi

Abstract: According to Christine Korsgaard, an obligation can be justified as such only by virtue of our own commitments and the practical identities that we occupy by virtue of those commitments. Insofar as this claim is accepted, an apparent problem presents itself, which Korsgaard herself identifies as “the paradox of self-constitution”. The problem is that a proponent of this view seems to be left with the potentially intractable burden of having to explain how our practical identities can be the ultimate normative source for our actions if those practical identities are themselves constituted by our actions. Korsgaard’s own solution is to suggest that the adoption of a practical identity should not be conceived of as a particular mental event, or a particular goal that we can achieve, but rather as an inescapably ongoing and intrinsically reason-giving activity that defines action itself. In this paper, I argue that such a conception of action is inadequate to encompass the activity of human agency in its entirety, because it cannot, by definition, encompass our capacity to make norms the object of our reflective scrutiny. I argue further that insofar as this is the case, it will have to be granted that the activity of reflective scrutiny is immune from the normative authority of even our own prior commitments.

Supervised by Evan Tiffany

Sparse Coding Explanation in Computational Neuroscience by Imran Thobani

Abstract: This paper investigates the nature of explanations in computational neuroscience, a field that has received relatively little attention in philosophy of neuroscience. It does so by examining a sparse coding explanation, in which the researchers construct a computational system for encoding images using a technique known as sparse coding. By observing how the computational system behaves depending on which image is being represented, the researchers can account for similar behavior in the response properties of neurons in the human visual cortex. In Section 3, I argue that this explanation has two crucial features, and I explicate each in detail. One is an instantiation relationship between a set of coefficients in the brain, which are used to represent visual information, and a set of firing rates of simple cells in the visual cortex. The other feature is a relationship between the sparse coding system designed by the researchers and a representational system in the brain. This relationship consists partly in the fact that the two systems encode sufficiently similar visual information using a set of coefficients that behave in a sufficiently similar way between the two systems. Because of this relationship, the sparse coding system has receptive fields that have the same coarse-grained spatial properties as receptive fields in the brain. However, for the explanation to work as it does, it is also crucial that the two systems not be exactly alike. The fine-grained differences between the two systems allow the researchers to explore the space of possible representational systems in a way that isolates the relevant properties of the brain that account for response properties of simple cells.

Supervised by Holly Andersen

On Cauchy's Rigourization of Complex Analysis by Gabriel Lariviere

Abstract: In this paper, I look at an important development in the history of Western mathematics: Cauchy's early (1814-1825) rigourization of complex analysis. I argue that his work should not be understood as a step in improving the deductive methods of mathematics but as a clear, innovative and systematic stance about the semantics of mathematical languages. Cauchy's approach is contrasted with Laplace's use of the "notational inductions," predominantly used in the calculation of various definite integrals. I suggest that Laplace's techniques are partly explained by the influence of some of Condillac's ideas about the language of algebra and abstract quantities. Cauchy's opposition is then not to be seen as stemming from a comeback of geometric and synthetic methods, as Argand's theory might be. Instead, from within this algebraic tradition, Cauchy rejected the key Condillacian doctrines that algebra is about abstract quantities and that its grammar provides means of discovering new mathematical truths. He thereby reduced the gap between arithmetic and algebra and fruitfully extended his approach to imaginary numbers, a significant way in which he contributed to the arithmetization of calculus and the development of complex analysis like no one before him. I finish by discussing lessons we can draw about how mathematical rigour differs from rigour in other sciences.

Supervised by Nic Fillion

Why Two (or more) Belief-Dependent Peers are Better than One by Arianna Falbo

Abstract: The following principle is widely assumed in the literature surrounding the epistemology of peer disagreement: When S disagrees with a group of epistemic peers P1, P2, P3. . . Pn concerning the truth of p, if S has already rationally accounted for the dissent of P1, then S should not consider the dissent of P2, P3. . . Pn if the beliefs of these subsequent dissenters are not independent of P1’s belief that p. I call this the ‘Belief-Dependence Principle’ (BDP) and argue that it is false. This principle overlooks the importance of a peer’s epistemic perspective, which can itself provide valuable psychological evidence. I argue that the degree of psychological evidence offered by a group of dissenting peers is partly a function of how distinct their perspectives are from one’s own and from those of other peers whose opinions one has already taken into account. In closing, I discuss how denying BDP brings into focus the importance of diverse perspectives to epistemic practices.

Supervised by Endre Begby

Testimony without Assertion by Mahan Esmaeilzadeh

Abstract: In this paper, I argue against closely associating testimony with only those speech-acts that convey high confidence, such as assertions. Given that one’s justification in holding a belief can come in degrees, there seems to be no reason to restrict the range of beliefs that a speaker can testify to, with the intent of informing her hearer, to only those beliefs that pass a certain threshold of justificatory strength. In making this case, I emphasize the distinction between the speaker’s expressed confidence and her actual evidential standing, arguing that while the former confers the right of complaint on the hearer, it is the latter that justifies the hearer’s belief.

Supervised by Endre Begby


On Signaling Games and their Models by Travis LaCroix

Abstract: To communicate meaningfully, members of a linguistic population must cooperate—i.e., they must agree upon some convention: one member should use a word to mean one thing if the other members use that word to mean the same thing. Conventional language-use in a population of speakers can be modeled using game-theoretic tools—this idea gives rise to the so-called signaling game. To explain how communication might evolve, contemporary scholars—following the work of Skyrms (1996)—have been using increasingly complex game-theoretic models which purport to capture readily observable linguistic phenomena in both human and non-human populations alike.

Shortly after the publication of Skyrms’ (1996) evolutionary account of social norms, and the introduction of evolutionary explanations of meaning, D’Arms, Batterman, and Gorny (1998) pointed out how surprisingly little critical attention this use of evolutionary models has received. Since 1998, a significant amount of work has been done applying the same evolutionary principles in the context of signaling games and the evolution of language. The models used in these various works have become more sophisticated and vastly more numerous; however, the practice of modeling signaling within an evolutionary framework has still received surprisingly little critical attention.

This paper applies the theoretical criteria laid out by D’Arms, et al. to various aspects of evolutionary models of signaling. The question that D’Arms, et al. seek to answer can be formulated as follows: Are the models that we use to explain the phenomena in question conceptually adequate? The conceptual adequacy question relates the formal aspects of the model to those aspects of the natural world that are supposed to be captured by the model. Moreover, this paper extends the analysis of D’Arms, et al. by asking the following additional question: Are the models that we use sufficient to explain the phenomena in question? The sufficiency question asks what formal resources are minimally required in order for the model to get the right results most of the time.

Supervised by Nic Fillion

Aristotle's Theory of Explanatory Entailment by Brad Zurcher

Abstract: Within the last sixty years, work by Jan Lukasiewicz (1957) and John Corcoran (1974) among others has reinvigorated interest in Aristotle's syllogistic logic by examining its relation to modern deductive systems. It is now generally agreed that Aristotle's syllogistic system possesses the rigor and precision necessary to permit formalization as a modern symbolic system. Though limited in the logical forms it can express, it is claimed that Aristotle was examining the same concept of logical consequence as modern logicians. However, this redemption of the syllogism as a first-class logical system only highlighted a previous perplexity: if Aristotle was a logician of such consummate skill and precision, why is it that the deductive system he developed is so expressively impoverished?

In this paper I provide an answer to this question by arguing, contrary to the current consensus, that Aristotle's syllogism was not intended to capture the concept of logical consequence simpliciter but instead a stricter notion of consequence—Explanatory Entailment—in which the premises of a deduction must not only necessitate, but also present an explanation for the truth of their conclusion. I then provide textual evidence from the Posterior Analytics that Aristotle believed this requirement could only be satisfied by the syllogistic system. I conclude that the limited expressive power of the syllogism is not the result of an oversight on Aristotle's part, but is instead a consequence of his concept of explanatory entailment and his belief that this concept could only be captured by the categorical structure of the syllogism.

Supervised by Nic Fillion

From a Deflationist Theory of Truth to Normative Meta-Epistemology by Graham Moore

On the Experience of Passage and the Temporally Evolving Point of View by Hesham Kahil

Clear and Distinct Perception and Self-Knowledge in Descartes by Michaela Manson

Kantian Moral Character and the Problem of Moral Progress by Nikolas Hamm

Abstract: There appears to be a flaw in Kant's theory of moral change. More precisely, the duty of moral self-improvement seems to be at odds with fundamental aspects of Kant's metaphysics of freedom. Kant claims that the determination of moral character must be through a free moral choice, and that free choice is only possible insofar as it lies outside space and time. Thus moral progress poses a problem for Kant; change presupposes temporal extension, and yet Kant is adamant that temporal properties cannot apply to noumena. 

This concern has been raised by a number of commentators as evidence against metaphysically laden interpretations of Kant's notions of agency and character. In what follows, however, I demonstrate that the problem Kant faces is merely illusory. A careful analysis of the nature of moral character and the process of moral conversion reveals that, due to the dual nature of humans—as both noumenally and phenomenally affected beings—an improvement in an agent's phenomenal incentives can suffice to explain moral conversion, provided that the motive for such change comes from the moral law itself.

Supervised by Owen Ware

Happiness as an Ideal of Non-Moral Choice in Kant by William Braedson

Abstract: I defend a reading of Kant’s account of non-moral choice that gives due weight to Kant’s views on happiness and prudential principles. The received or “intellectual” view on Kant emphasizes the role of objective reasons. On this view, the paradigm form of choice is that which is guided by principles or values that are universally valid; this makes non-moral choices merely erroneous or misinformed attempts at moral choice. The view I defend takes into account Kant’s remarks on sensible inclinations. As human beings with sensible needs, we have a “commission” to pursue happiness, i.e. the greatest possible satisfaction of one’s desires. However, since the concept of happiness is necessarily indeterminate, everyone is faced with the task of deliberating about what is constitutive of one’s own happiness. This deliberation opens a space for non-moral (hedonic) choice that does not structure itself according to the intellectualist commitment to universally valid reasons. Furthermore, since one brings a given sensible inclination under a hedonic maxim, my account is able to meet the requirements of intelligibility and spontaneity that motivate the intellectualist account.

Supervised by Owen Ware

Breaking Promises: Objective and Attitudinal Wrongs by Brittany French

Abstract: As a bipolar duty, a promissory obligation places the promisor in a position to wrong her promisee if she fails to fulfill her obligations to the promisee without a good excuse. In the event of a failed promise, a wrong is incurred on the promisee. There is an ‘objective wrong’ that results from failure to complete the actions necessary for fulfillment of the contents of the promise. I argue that there is another kind of wronging, which I call an ‘attitudinal wrong,’ that can be incurred on the promisee in cases where the promisor’s attitudes express a disregard for the promisee to whom she owes an obligation. I build off of the concept of ‘normative injury’ discussed in Jay Wallace’s forthcoming “Lectures” to give shape to the two kinds of wrongings. As a set-up to my thesis, I provide the architecture of defective promises, arguing that in only three of the five classes of failed promises does a promisee suffer some form of wrong.

Supervised by Evan Tiffany


Machine Learning as Experiment by Kathleen Creel

Abstract:  Scientific inquiry increasingly depends on massive data sets, such as the data produced by the Large Hadron Collider or by government agencies for study of the social sciences. While the epistemic role and experimental status of computer simulation has recently been discussed, the relationship between these datasets, the computational techniques used to find patterns in them, and the phenomena the datasets describe requires more extensive philosophical analysis. I propose that machine learning, a branch of artificial intelligence research that focuses on algorithms designed to improve their performance at tasks over time, does not merely detect patterns in data, as has been suggested. Rather, two types of machine learning, genetic programming and deep learning, deserve the epistemic status of experimental methods. By analyzing patterns in data, genetic programming and deep learning enable us to discover a broader range of phenomena than traditional techniques of investigation can. These varieties of machine learning possess a characteristic cluster of the features of experimentation and contribute to scientific inquiry in their disciplines in ways similar to the contributions of experiments in other disciplines. Because of these two factors, which I suggest constitute a definition of experiment, scientific activities that include these types of machine learning have the epistemic status of experiment.

Supervised by Holly Andersen

What is rationality good for?  A game theoretic perspective by Kino Zhao

Abstract:  Game theory is sometimes defined as “the study of interactions among rational agents”. This perspective has proven successful in many disciplines, such as economics, behavioral psychology, and evolutionary biology. It also faces some challenges, many of which relate, in one form or another, to how rationality is defined and utilized. Why does game theory need rationality? Or does it? In this paper, I examine the roles rationality plays in three subfields of game theory: classical, epistemic, and evolutionary. In particular, I look for roles that seem to require rationality, and question whether they can be played by any alternatives. I later develop two examples to explore the possibility of using game theoretic models without assuming players to be rational. Using the examples, I argue for my two theses: 1) the rational reasoning process should not enjoy any “privilege” over heuristics when used to interpret player behavior, except when they differ in their predictive outcomes; 2) discrepancies between rational predictions and experimental evidence should be treated as a call for alternative perspectives, instead of as a reason to discredit the use of game theoretic modeling.

Supervised by Nic Fillion

The Relational Conception of Equality by Devon Cass

Abstract:  A prevailing assumption in political philosophy is that equality essentially requires that individuals are entitled to a (pro tanto) equal distribution of goods. Critics claim that this view misses the point of equality and argue that its value should be understood as involving the quality of social relationships more broadly. This paper examines the distributive implications of this ‘relational’ conception of equality. I argue that the existing proposals of ‘relational equality’ offered by Samuel Scheffler and Elizabeth Anderson have overly ambiguous or implausible implications for distributive justice. In turn, I develop an account of relational equality that addresses these issues.

Supervised by Sam Black

Territorial Rights and Cultural Attachment by Joshua Ogden

Abstract:  Sovereign states exercise territorial rights:  to control jurisdiction, borders, and resources. While theorists have long discussed the justification of state rights over people, the justification of state rights over territory has, until recently, been largely taken for granted. In this essay, I argue that the state is not the ultimate holder of territorial rights; nor have territorial rights been transferred to the state from individuals.  Rather, states exercise territorial rights on behalf of nations – a type of cultural group. I propose an account that differs from other such nationalist accounts, for one, by being entirely present-oriented in its justification – denying, for instance, the quasi-Lockean thesis that a nation’s history of ‘developing’ the land can, in itself, directly generate territorial rights. I argue instead, from a principle of treating people as equals, that territorial rights ought to be assigned to nations on the basis of present cultural attachment to their respective homelands.

Supervised by Sam Black

The Referent of ‘Reference’ and the Philosophy of Philosophy by Brent Stewart

Abstract:  Naturalism suggests two types of constraints on inquiry. The first is that lower and higher level theories should be able to be integrated. For example, if one theory assumes something that another theory rules out – then something has to give. The second (foregrounded by Huw Price) is that the results of the human sciences should be used to calibrate the process of inquiry itself. The human being is the most basic instrument of inquiry and what we learn about ourselves has direct relevance for the methodology of inquiry. Using the first constraint, I explore what impact the success of enactive cognitive science might have on theories of linguistic reference. I argue that the two main contenders – the descriptive and causal-historical theories – do not mesh well with the basic tenets of enactive cognitive science. In response, I propose a sketch of a new theory – attentional theory – that fares better. Next, using the second constraint, I look at how adopting attentional theory might change philosophical practice. The result, I argue, is a change in how philosophers can appeal to language in constructing arguments and a subsequent refocusing of what makes for a good philosophical target.

Supervised by Ray Jennings 

The Phenomenology of Becoming a Moral Agent by Emily Hodges

Abstract:  While we often assume that individuals are essentially capable of being moral agents, it is not the case that we are all automatically moral. In fact, many of us feel that being a moral agent is not just something that we ought to do, but something we ought to commit to even if we do not always get it right. Perhaps becoming a moral agent is a process that takes time, hard work, and commitment. In this paper I argue that becoming a moral agent involves a process comprising two critical moments. To understand this process, I explore the moments of becoming a moral agent as described by Kant and Levinas. In the Kantian moment, the individual becomes a moral agent when she realizes the objective authority of the moral law, which subjectively grounds moral action through the experience of respect. Yet the Kantian account does not seem able to grant others the full dignity they deserve as ends-in-themselves. I therefore explore Levinas’s account wherein the individual experiences the other as the ground of moral obligation. Though such grounding grants the other dignity, the Levinasian account cannot give us a moral standard or self-respect. I conclude that the two moments can be synthesized into a single process of moral cultivation. Here, the moral law is created through the individual’s encounter with a “second-personal demand.” However, it is only upon conscious self-legislation of the moral law that the agent takes on the higher vocation of being a responsible interlocutor that answers the ethical demand of the other respectfully. As the self is here transformed into a moral agent, the subjective motivation is self-respect as well as respect for the other. The moral self is connected intrinsically to the other, grounding morality in the dignity of both the self and other.

Supervised by Evan Tiffany

Kant on Givenness and the Phenomenological non-Presence of Space and Time by Rosalind Chaplin

Abstract:  A number of contemporary Kant scholars hold that Kant believes space and time are ‘given’ as phenomenologically present to the mind in sensibility alone. I argue that Kant in fact denies that either space or time is epistemically accessible to us in mere sensibility and that his considered position is that we depend on synthesis for awareness of the basic properties of space and time. However, this is not to say that space and time themselves depend on synthesis. As I argue, Kant believes that our epistemic access to space and time depends on synthesis, but this is compatible with the claim that space and time are ‘given’ in sensibility alone. In the context of his discussion of space and time, Kant takes ‘givenness’ to be a metaphysical notion rather than an epistemological one, and part of what is distinctive about Kant’s theory of space and time is his effort to pull metaphysical and epistemological priority apart.

Supervised by Dai Heide

A Lawful Freedom: Kant’s Practical Refutation of Noumenal Chance by Nicholas Dunn

Abstract: This paper asks how Kant’s mature theory of freedom handles an objection pertaining to chance. This question is significant given that Kant raises this criticism against libertarianism in his early writings on freedom before coming to adopt a libertarian view of freedom in the Critical period. After motivating the problem of how Kant can hold that the free actions of human beings lack determining grounds while at the same time maintaining that these are not the result of ‘blind chance,’ I argue that Kant’s Critical doctrine of transcendental idealism, while creating the ‘conceptual space’ for libertarian freedom, is not intended to provide an answer to the problem of chance with respect to our free agency. I go on to show how the resources for a refutation of chance only come about in the practical philosophy. In the second Critique, Kant famously argues for the reality of freedom on the basis of our consciousness of the Moral Law as the law of a free will. However, Kant also comes to build into his account of the will a genuine power of choice, which involves the capacity to deviate from the Moral Law. I conclude by showing that this apparent tension can be resolved by turning to his argument for the impossibility of a diabolical will. This involves a consideration of the distinct kind of grounding relationship that practical laws have to the human will, as well as the way that transcendental idealism makes this possible.

Supervised by Dai Heide

Is Conciliationism Incoherent? by Mike Perry

Abstract:  Conciliationism—according to which whenever you find yourself in disagreement with an epistemic peer about a proposition, you should significantly shift your credence in that proposition in their direction—is an epistemic rule that is both intuitively compelling and useful. Adam Elga has argued, however, that Conciliationism is incoherent because it is self-undermining (it sometimes says that you should reduce confidence in itself). After a brief discussion of epistemic rules in general, I explain and evaluate Elga’s argument. The argument, it turns out, threatens not only Conciliationism, but almost any epistemic rule that it’s sometimes rational to doubt. Thankfully, it is persuasive only if we ignore an important distinction between different senses in which it can be true that someone “should” do something. Conciliationism does not emerge unscathed, however. Although the argument doesn’t show that Conciliationism is incoherent, it does show that it’s not as useful as it might initially seem.

Supervised by Endre Begby

Enactivism: Filling in the Gaps by Nicole Pernat

Abstract:  Enactivism is an anti-representationalist version of embodied cognition that aims to explain visual perception and perceptual presence. It holds that these are grounded in tight couplings between sensory input and motor output called “sensorimotor contingencies” (SMCs). I argue that enactivists Kevin O’Regan and Alva Noe are unclear on the nature of SMCs and their mastery. Explaining such theoretical constructs requires describing their physical mechanisms (Andersen 2011; Machamer, Darden, and Craver 2000), which in turn requires clarifying what mastery, in this case, is. I outline four possible interpretations and show why the first three do not serve the enactivists’ purpose. Grush (2007) provides the fourth interpretation: emulation. He provides necessary details on the nature of sensorimotor processes and convincingly argues that mastered SMCs are emulators, which are representational. Enactivists thus cannot detail their view sufficiently to explain perceptual presence while remaining anti-representationalist. I conclude, contra the enactivists, that mastered SMCs are most likely representational.

Supervised by Kathleen Akins

Doxastic Normativity Without Epistemic Justification by Sikander Gilani

Abstract:  This paper sketches a type of justification for beliefs that does not aim for the truth, namely phenomenal justification. The type of justification suggested is structured by experience alone, and not by truth, utility, perfection, or any other traditionally recognized source of doxastic normativity. In making a case for phenomenal justification, I explicate and motivate the problem of the criterion, and show how phenomenal justification has an advantage over epistemic justification in avoiding the problem.

Supervised by Phil Hanson

On the ‘Evolutionary Contingency Thesis’ Debate by Tiernan Armstrong-Ingram

Abstract:  John Beatty developed the ‘Evolutionary Contingency Thesis’ – that all outcomes of evolution are contingent – in support of the conclusion that there are no laws in biology. Contra Beatty, Elliott Sober has argued against the move from contingency to lawlessness. Lane Des Autels criticizes Sober’s reply to Beatty, claiming that Sober admits of too many laws. Here, I argue three points relevant to this tripartite debate. First, Sober’s attempt to circumvent the contingency problem strips away the causal element found in Beatty’s account of contingency. Second, Des Autels has unfairly characterized Sober as granting law-hood to any contingent generalization we might choose, when Sober is merely arguing that contingent generalizations should not be systematically excluded from consideration for law-hood. Third, Sober’s response to Beatty only succeeds if one also holds Sober’s conception of what constitutes a law, which Beatty does not. Fundamental to their disagreement is a difference in how Beatty and Sober conceive of what does or does not count as a ‘law’, and that issue extends well beyond biology and the philosophy thereof.

Supervised by Holly Andersen

The Argument From Perceptual Variation Reconsidered by Jane Chen

Abstract: Jonathan Cohen argues for relationalism about colour by way of what he calls ‘the argument from perceptual variation’. Cohen’s relationalism claims that whether a stimulus has a particular colour depends on who perceives the stimulus and on the viewing conditions under which the stimulus is perceived (including illumination, background, and so on). For example, if a single spectral light appears pure green to Subject A and bluish green to Subject B, then it is pure green to Subject A and bluish green to Subject B. To support this position, the argument from perceptual variation holds that there is no principled reason to choose between the two distinct perceptual effects (usually called “phenomenal appearances,” “experiences,” or “visual states”) of the two subjects, and that anti-relationalism implies a selection between them and therefore involves stipulation, which should be avoided when possible. Against this argument, I contend that there is at least one coherent and plausible anti-relationalist account of colour that does not imply a selection between the two perceptual effects and thus also avoids stipulation. To make my point, I construct a hypothetical case called “the L box,” which exactly parallels the case of perceptual variation of colour, and argue that a relationalist conclusion about the L properties does not follow. I anticipate three objections with regard to colour terms. My conclusion is that perceptual variation should not pose a problem for anti-relationalism in the case of colour.

Supervised by Kathleen Akins

A Comparison of William James and Nietzsche on Consciousness and Will by Vera Yuen (Note: this is a thesis)

Abstract:  My thesis compares William James’ and Friedrich Nietzsche’s construals of consciousness and will, two of the core notions in both philosophy and psychology. I delineate the elements that are significant in their respective accounts of these notions, and show that there are interesting and significant parallels in their views. An appreciation of the affinities between James’ and Nietzsche’s accounts of consciousness and will facilitates an appreciation of their remarkably parallel contributions to both philosophy and psychology. It also enhances an appreciation of James as a philosopher with a rich background and expertise in psychology, and of Nietzsche as an original, important philosopher-psychologist. Furthermore, the parallels I draw between their views provide materials that substantiate the construal of a strand in contemporary psychology that is philosophically informed and pragmatist, and that embraces a radical version of empiricism.

Supervised by Holly Andersen

On Logical Revision: Field, Maddy, and a Hybrid Proposal by Matthew Harrop

Abstract:  I begin this paper by exploring two approaches to logical revision – one broadly motivated by ‘rational’ considerations and the other by ‘empirical’ considerations. First, I consider Hartry Field’s proposal to adopt a paracomplete logic (FPL) in order to overcome the Liar and Curry Paradoxes. I then examine Penelope Maddy’s proposal, which takes a weak “rudimentary logic” to capture both the logical structure of the macro-world of our everyday experience and the logical structure of our cognition.  After pointing out and attempting to overcome particular difficulties faced by these two accounts of logical revision, I sketch a conception of logic that involves a novel combination of what I take to be the plausible elements in Field’s and Maddy’s views – namely that the logical structure of the macro-world has influenced the logical structure of our cognition via natural selection and that our choice of which all-purpose logic to adopt is settled in large part by an evaluation of how well these logics allow us to achieve our epistemic goals. One of the upshots of this hybrid account is that it takes seriously the problem of modesty – a conceptual puzzle associated with the rational revision of logic – while still allowing for the rational revision of logic. Another upshot is that it allows the rational revision of logic to be motivated by either ‘rational’ or ‘empirical’ concerns. I conclude the paper by considering challenges to my proposal, specifically those stemming from its (in)ability to accommodate dialetheism.

Supervised by Phil Hanson

Disagreement:  What’s It Really Good For? by Jesus Moreno

Abstract:  In this essay I will argue that the practice of inquiry presupposes the possibility of seeing each other as disagreeing on normative matters. If inquiry is a constitutive feature of being the kind of beings that we are – that is, if inquiry is indispensable to us insofar as we cannot fail to engage in it – then the possibility of seeing each other as disagreeing on normative matters must be part of the picture presented by our theoretical accounts of normativity. I will base my argument on two distinctions: a) between first-person and third-person perspectives on cases of agreement/non-agreement, and b) between agreement/non-agreement (in either of the two forms just mentioned) and conflict. Conflict, which I shall characterize as incompatibility of preferences, is shown to be neither necessary nor sufficient for either form of disagreement. I will focus on cases of first- and third-person disagreement, arguing that they are neither necessary nor sufficient for one another (we can be disagreeing without knowing so, and we can think we are disagreeing without being in third-person disagreement). It is the first-person form of disagreement which I argue proves indispensable for inquiry. I will attempt to show that first-person disagreement on normative matters cannot be reduced to a conflict of preferences, for in cases where such reduction is attempted we would fail to see ourselves, and others, as disagreeing. If an account of normativity entails the dismissal of first-person disagreement on normative matters, then such an account cannot be consistently held: either it would lead us to employ, in the very practice of inquiring whether the account can be held, what is being dismissed (which is inconsistent), or it would lead us to quietism.

Supervised by Phil Hanson

A Substantive-Argument Alternative to Arguing from Insufficient Evidence by Berman Chan

Abstract:  The first section of this paper argues that arguing from insufficient evidence may be legitimate in arguments meant to encourage some practical outcome. I distinguish those cases from philosophical arguments, which are mostly concerned with evaluating arguments for their own sakes, and in which, I argue, the use of burden of proof is not legitimate. The second section discusses some arguments from insufficient evidence that superficially seem to use burden of proof. I give reasons for approving of some of these arguments, developing and defending a method that employs substantive arguments instead, thus avoiding appeals to burden of proof.

Supervised by Martin Hahn

Fitting a Square Peg into a Eudaimonic Hole:  LeBar, Virtue Ethics and the Second-Person Standpoint by Graham Robertson

Abstract:  A fairly well-known objection to virtue ethics is that it is unable to provide the proper sort of reasons that a moral theory ought to: those commonly known as ‘victim-focused’ reasons, as opposed to those focused on the flourishing of the agents themselves. In his paper ‘Virtue Ethics and Deontic Constraints’, LeBar attempts to meet this objection by incorporating Darwall’s second-person standpoint—a metaethical view of moral reasons—into the normative component of a eudaimonic virtue theory. In this paper I argue that LeBar fails to properly incorporate victim-focused reasons, because he cannot account for second-person reasons possessing a normative force, relative to other reasons, that is based solely on the authority of second-personal claims.

Supervised by Evan Tiffany

Kant’s Transcendental Exposition of Space and the Principles of Intuitive Reasoning by Christopher Palmer

Abstract:  The Transcendental Exposition of the Concept of Space occupies a peculiar place in Kant’s theory of geometry. Clearly, the Exposition tries to connect his thoughts on space and geometry; what is challenging is understanding the precise connection Kant is drawing. Philosophers have historically interpreted the Exposition as containing an argument regarding our representation of space that is premised upon the epistemic status of geometrical judgments: because synthetic a priori cognitions of geometry are possible, space must be a pure intuition, since no other representation is suitable for these judgments. Modern Kant scholars interpret the arguments of the Transcendental Exposition as presenting Kant as arguing in roughly the opposite way: his account of space as a pure intuition is a premise in an argument that affirms the possibility of the synthetic a priori cognitions of geometry. From the conclusions about space in the Metaphysical Exposition, Kant (on this reading) already has the material necessary to prove how the kinds of judgments unique to geometry are conceivable. While I agree with the modern interpretation’s general point that Kant is arguing from a particular conception of space to a claim about geometry, I do not think that he is attempting to prove the possibility of a certain category of cognitions. Instead, I will argue that in the Transcendental Exposition Kant is demonstrating the possibility of a particular method or procedure by which cognizers attain synthetic a priori cognitions. The arguments of the Exposition establish a principle regarding the a priori determination of concepts, and this principle is indispensable for construction and intuitive reasoning, which is how Kant thinks cognizers attain such synthetic a priori cognitions.

Supervised by Dai Heide