Book Reviews

Kevin C. Elliott and Daniel Steel, eds., Current Controversies in Values and Science, Routledge, 2017

As a general claim, most philosophers of science accept that science is not value-free. The disagreements lie in the proverbial details. The essays in Current Controversies in Values and Science, edited by Kevin Elliott and Daniel Steel, focus on such details. Like other volumes in the Routledge Current Controversies in Philosophy series, this one asks ten well-known philosophers of science to engage with various questions. Each question receives, roughly, a positive and a negative response, though the authors’ nuanced answers make clear that the contrasting views also involve significant agreements.

The first question asks whether we can distinguish epistemic from non-epistemic values. Hugh Lacey argues that such a methodological distinction is not only possible but also desirable. For him, different attitudes are appropriate regarding scientific theories, and attention to these different attitudes demonstrates the importance of the distinction. Epistemic, or rather cognitive, values are those that allow us to evaluate how well a scientific theory provides understanding of a particular phenomenon. Non-epistemic values, and in particular social values, on the other hand, allow us to evaluate social arrangements, institutions, and practices. Only cognitive values, Lacey contends, are relevant to deciding whether a theory is impartially held of a set of phenomena. But scientific theories can be more than just impartially held. They can also be adopted, i.e., used as a basis for further research, or endorsed, i.e., used to inform decision-making. According to Lacey, non-cognitive values are relevant to the justification of the attitudes of adopting and endorsing, even if they do not play a proper role in impartially holding a theory.

Phyllis Rooney agrees that a general methodological distinction between epistemic or cognitive values and non-epistemic ones is possible, but she questions the usefulness of a sharp distinction. Her contention is that, rather than a strict delineation, we find a “robust borderlands area” between epistemic and non-epistemic values. Rooney questions the sharpness of the epistemic/non-epistemic distinction on various grounds. First, she argues, philosophers disagree even about which values count as epistemic or cognitive. This is so, she points out, because science has a multiplicity of legitimate goals, and what one takes to be scientific inquiry’s primary goal(s) will affect what counts as an epistemic value. Second, non-epistemic values are hardly a uniform group, but more importantly, the use of some of those values, e.g., feminist values, has clearly contributed to the development of epistemically sound theories.

Although at first sight it might appear that Lacey and Rooney defend opposing sides, the disagreements are more a question of emphasis. For Lacey, the distinction between epistemic and non-epistemic values is important because a failure to make such a delineation effectively gives scientists more authority in policy decisions than they should have. Rooney, however, is concerned that drawing that distinction risks inappropriately delegitimizing the use of some non-epistemic values when conducting research while legitimizing the use of some epistemic values that depend on people’s judgments about what the primary goal of science might be. Both agree that non-epistemic values can and should play very significant roles in scientific inquiry.

The second question tackled in the collection concerns whether science must be committed to prioritizing epistemic over non-epistemic values.  Daniel Steel argues for a qualified priority of epistemic concerns in science. He offers two arguments for his position. First, science, he contends, has an immediate aim, which is to advance knowledge. Second, a rejection of the priority of epistemic values can lead to what he calls the “Ibsen predicament,” wherein attempts to promote a valued social aim can lead to corrupted science. Steel claims that only maintaining the priority of epistemic values can protect us against this outcome.

Matthew Brown presents the opposing view and argues that we should reject any strong version of the priority-of-epistemic-values thesis. He presents three arguments to defend his claim. First, epistemic and non-epistemic considerations are too entangled in scientific inquiry to make talk of prioritization meaningful. Second, non-epistemic values can be defended with good reasons, and epistemic values can lead to wishful thinking just as non-epistemic ones can. Third, epistemic standards are contextually and historically dependent; they can be reevaluated in the course of inquiry. For Brown, rejecting the epistemic priority thesis has an added benefit: it forces scientists to consider the social consequences of their work because they have to consider trade-offs between epistemic and non-epistemic values.

In spite of the contrasting answers, it is not clear that Steel’s and Brown’s positions are significantly different. Perhaps, as before, the differences are more a matter of emphasis. Clearly, neither Brown, as he explicitly says, nor anyone else Steel mentions in his essay thinks that epistemic considerations are unimportant or that scientists should accept scientific claims on the basis of non-epistemic values alone. It seems that Brown is more concerned with ensuring that scientists take their responsibilities regarding the social consequences of scientific inquiry seriously, and he worries that the fetishization of epistemic values can detract from this. Steel seems to fear that a failure to prioritize epistemic values can result in scientific theories driven by ideological interests. It is not clear, however, that his Ibsen predicament makes the case he wants to make. Dr. Stockmann does not seem to face a conflict between epistemic and non-epistemic values, but one between various non-epistemic values: protecting the town’s livelihood or risking some people’s health. There is no need to deny the results of the study. Its conclusion, i.e., that the baths are contaminated, does not mandate a particular policy action. To believe that it does is to misunderstand the role of science in policymaking.

If the previous authors seem to disagree mostly on the details, Heather Douglas and Gregor Betz disagree in their answers to the question they are addressing: whether the argument from inductive risk justifies incorporating non-epistemic values in scientific reasoning. Indeed, Douglas and Betz agree on many of the details but differ on what follows from them. For Douglas, because most science is inescapably uncertain, scientists must make value judgments about the consequences of error. This is so, she argues, if science is to be useful for policymaking. Hence, scientists not only incorporate value judgments when making scientific claims, they have a duty to do so because of the authority that science has. In her view, the existence of inductive risks justifies scientists’ use of non-epistemic value judgments in scientific reasoning.

Betz, on the other hand, agrees that much socially relevant science is uncertain and that scientific evidence should inform public policy. He rejects, however, the need for scientists to close the uncertainty gap by making non-epistemic value judgments. For him, scientists can deal with uncertainty by disclosing it to policymakers in various ways: spelling out the consequences of all the alternatives; altering the conceptual framework used for their research; quantifying the uncertainties in terms of probabilities; and making any non-epistemic value judgments transparent. Moreover, for Betz it is ethically inappropriate for scientists to incorporate non-epistemic value judgments. In democratic societies, that is the job of policymakers, not of scientists.

Both authors believe their arguments have implications for the value-free ideal of science, i.e., the ideal that scientists ought to refrain from incorporating non-epistemic value judgments in scientific reasoning. For Douglas, the existence of inductive risks and the duty that scientists have to offer informative policy advice undermine the value-free ideal. For Betz, the fact that scientists can offer informative scientific claims without the need to incorporate non-epistemic value judgments vindicates the ideal. However, one can agree with Betz that the inductive risk argument is insufficient to undermine the value-free ideal and still reject such an ideal because non-epistemic values are incorporated in many other ways in scientific reasoning (de Melo-Martin and Intemann 2016).

The fourth question focuses on whether the social value management ideal espoused by Helen Longino can incorporate all epistemically beneficial diversity while also excluding problematic moral and political points of view. Kristina Rolin argues that it can. Although she recognizes that Longino was particularly concerned with diversity of values because of its epistemic benefits, Rolin contends that the social value management ideal could include other types of epistemically beneficial diversity, such as diversity of standpoints, theoretical approaches, or research strategies. She further argues that although the social value management ideal requires that scientific communities share some standards of evaluation for transformative criticism to arise, such a requirement need not exclude a diversity of views. This is so because the shared standards requirement should be interpreted in a thin way, allowing for the incorporation of diverse points of view. This does not mean that anything goes. For Rolin, the tempered equality and uptake criteria espoused by the social value management model serve to exclude inappropriate values, such as sexist and racist ones, from consideration.

Kristen Intemann recognizes the important contributions of the social value management model towards advancing the aims of feminist philosophy of science but argues that it is insufficient. For her, the type of diversity that the model calls for, i.e., diversity of values and interests, and the role it gives values in advancing objectivity are too limited. Because the context in which research happens makes certain points of view or certain values more likely to be represented or heard, the mechanisms the social value management model uses to exclude problematic values will in fact fail to do so. What we need, Intemann argues, is not value management but the explicit endorsement of social justice values. Endorsing such values will exclude sexist and racist values from consideration when doing science.

As in some of the previous chapters, the differences between the viewpoints presented here are not substantive. The different answers are again more the result of attending to different aspects of the question. Rolin is concerned with defending the social value management ideal against claims that it does not allow for appropriate types of diversity and that it is too inclusive, thus allowing the incorporation of problematic values. Intemann is concerned with advancing the aims of feminist philosophy of science, and in that respect she finds the social value management ideal wanting because it fails to attend to social diversity and lacks mechanisms to exclude values that are inconsistent with feminist values.

The final question in the volume focuses on the type of research funding system that would best serve values of social justice and democracy. James Robert Brown and Julian Reiss agree that much is wrong with the status quo, but they arrive at different conclusions regarding this question. For Brown, the influence of commercial interests in science is problematic because of its corrupting effects and its skewing of the research agenda. He believes that the best way to address both problems is to socialize medical research, that is, to fund it primarily through taxes. A socialized research system would do away with intellectual property rights, such as patents, and would result, he believes, in financially disinterested researchers who could conduct research impartially. More importantly, it would allow researchers to expand the range of options considered when addressing medical problems.

For Reiss, the problems with commercialized research are not so much the result of the influence of private funding as of too little free market. He agrees with Brown about eliminating patents, but in his case it is because patents involve a kind of market interventionism that stifles competition. Similarly, he rejects drug regulation by government bodies such as the FDA. From an epistemic point of view, Reiss argues, the lack of clear and judicially enforceable evidentiary standards gives the FDA reasons to set the bar too high, thus leading to the inappropriate exclusion of some drugs. Moreover, the FDA takes away individuals’ ability to decide how much risk they are willing to accept from certain medications. For Reiss, a truly free market would give companies an incentive to produce the best products possible and would allow citizens to make decisions about how to trade off risks and benefits.

What research funding system can best promote research integrity and a socially responsive research agenda is ultimately an empirical question. Nonetheless, perhaps Brown has too much faith in the ability of public institutions to achieve these goals and underestimates the value of private funding. On the other hand, Reiss seems to have too much faith in the good workings of a free market for biomedical research, and he underestimates the many ways in which power differentials and unjust social conditions can make it difficult for citizens to be appropriately informed and easy for drug companies to game the system.

Many things speak in favor of Current Controversies in Values and Science. Although the questions addressed are not the only relevant issues in debates about values and science, they are significant ones. Moreover, the authors included are some of the main protagonists in these debates. The fact that the authors of contrasting positions engage each other makes the essays more appealing and relevant. Even when the disagreements between authors are minor, differences in emphasis and concerns are significant when approaching debates about the relationships between non-epistemic values and science. Anyone interested in such issues would gain greatly from reading this volume.


Inmaculada de Melo-Martín

Division of Medical Ethics

Weill Cornell Medicine—Cornell University

New York, NY


References

de Melo-Martín, I. and Intemann, K. (2016). “The Risk of Using Inductive Risk to Challenge the Value-Free Ideal,” Philosophy of Science 83: 500–520.
