It’s not (just) about the money

By Miriam Wiersma, Ian Kerridge and Wendy Lipworth

This post originally appeared in the August 2018 edition of the Research Ethics Monthly and is reproduced with permission. It can be cited as Wiersma, M., Kerridge, I. and Lipworth, W. (22 August 2018) It’s not (just) about the money. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/its-not-just-about-the-money

Let’s imagine for a moment that you are a mid-career university researcher with growing expertise in a particular field. A pharmaceutical company contacts you and says that it would like to recognise the important work you are doing in this area, and has asked you to choose among the following forms of recognition:

Image: black and white photograph of a laboratory hallway, glass doors in the distance. Credit: Bill Dickinson.

  A. $10,000 towards a research project related to one of the company’s drugs.
  B. Being chosen as a keynote speaker to present at a prestigious conference, with no honorarium.
  C. Being invited to join an international advisory board.

What would you choose? Would you choose the money? Or is there something appealing about the acknowledgement of your expertise in Option B, or impressive status associated with Option C?

Perhaps simply contemplating these questions makes you feel uncomfortable. After all, as medical researchers, questioning what motivates our behaviour or actions beyond the pursuit of scientific knowledge is not exactly pleasant. We like to think that we act in a way that is free from bias – and that while other researchers may have conflicts of interest, we certainly do not. Or at least not conflicts of interest that matter. Which raises the question – what types of things create conflicts of interest (COI)? Is it only when money enters the equation, or are there other forces at play?

It would appear, from the emphasis placed on financial COIs by medical journals, conference organisers and professional societies, that only money matters (Komesaroff et al., 2012; JAMA, 2017). The COI disclosure forms that we dutifully complete tend to focus on financial COI and are comparatively vague when it comes to the declaration of non-financial COI (if indeed such declaration is required at all). Similarly, the disclosure statements made by speakers at conferences tend to take the form of ‘Dr X received $$$ from Company Y, $$ from Company Z’, and so the list goes on.

But we believe that this exclusive emphasis on money overlooks many other non-financial interests that can create significant COI. These may stem from personal or religious beliefs – for example, Christian views about the moral status of the embryo held by legislators and scientists undoubtedly played a major role in securing the prohibition of public funding of embryonic stem cell research.

Non-financial COI may also arise from a researcher’s desire for status or respect. As the case study illustrates, pharmaceutical companies may utilise both financial and non-financial incentives to encourage industry collaboration and promote industry agendas.

Personal circumstances and relationships also have the potential to give rise to non-financial COI – for example, if a member of a drug regulatory agency had a close relative who could benefit from the subsidisation of a drug under consideration, this would constitute an obvious non-financial COI. Interests such as these have long been recognised in other contexts, including in the public sector (Australian Public Service Commission, 2017; OECD, 2003). The OECD Managing conflict of interest in the public service guidelines state that any ‘forward looking’ policy should describe non-financial sources of COI – including non-financial personal interests and relationships (OECD, 2003). The Australian Public Service Commission also specifies that social relationships and personal interests should be declared by employees.

We argue that overlooking non-financial COIs is problematic for several reasons (Wiersma et al., 2018a; Wiersma et al., 2018b). Most importantly, disregarding non-financial COI ignores the fact that serious harm may arise from such conflicts. We need look no further than the notorious Tuskegee scandal (Toy, 2017) or the Guatemalan ‘research’ (Subramanian, 2017) to see that the drive to satisfy scientific curiosity can not only cloud researchers’ judgement, but can also cause significant harm to (unwilling or unknowing) participants.

Furthermore, ignoring non-financial COI also fails to take into account the fact that financial and non-financial COI are frequently entwined. For example, recognition by the pharmaceutical industry as a ‘Key Opinion Leader’ is not only associated with financial remuneration (for example, speaker’s fees), but also status and prestige.

We have also argued that non-financial COI can be managed using similar strategies to those used to manage financial COI (Wiersma et al., 2018a). There is no reason, for example, why a person on a drug regulatory committee could not disclose that they have a relative with a medical condition that may benefit from the drug under consideration, and recuse themselves from voting in relation to that particular drug.

Of course, given the highly personal nature of some non-financial interests, it is important that declaration should only be required when evidence indicates that these may lead to a non-financial COI. Here we can draw from the Australian Public Service Commission guidelines which state that a personal interest does not lead to a conflict of interest unless there is ‘real or sensible’ (not merely theoretical) possibility of conflict. It is also crucial that declarations are handled with discretion.

None of this is to disregard the difficulties in determining what precisely constitutes a conflict of interest in medicine and how these should be managed. Medical researchers and practitioners have long grappled with these questions, and heated debate as to what should or should not be considered a ‘COI’ and what types of COI should be managed continues to this day (Bero, 2017; Wiersma et al., 2018b).

However, we believe that acknowledging the importance of non-financial COI may be the starting point for a more sophisticated approach to managing both financial and non-financial COI in health and biomedicine. Perhaps most importantly, by acknowledging that we are all conflicted in certain ways, and that having a COI is not necessarily ‘bad,’ we may be able to take some of the ‘sting’ out of the label. And this may, in turn, encourage open discussion and disclosure of both financial and non-financial COI, enhance our understanding of COIs in general, and help us develop and refine a more nuanced approach to all forms of COI.

References

Australian Public Service Commission (2017) Values and code of conduct in practice. Australian Government. Available from: https://www.apsc.gov.au/aps-values-and-code-conduct-practice

Bero, L. (2017) Addressing bias and conflict of interest among biomedical researchers. JAMA: The Journal of the American Medical Association, 317(17): 1723-4.

JAMA: The Journal of the American Medical Association (2017) Conflict of interest theme issue. JAMA: The Journal of the American Medical Association, 317 (17):1707-1812. Available from: https://jamanetwork.com/journals/jama/issue/317/17

Komesaroff, P., Kerridge, I. & Lipworth, W. (2012) Don’t show me the money: the dangers of non-financial conflicts. The Conversation. March 30. Available from: https://theconversation.com/dont-show-me-the-money-the-dangers-of-non-financial-conflicts-5013

OECD (2003) Managing conflict of interest in the public service. OECD guidelines and country experiences. Organisation for Economic Co-operation and Development. Available from: http://www.oecd.org/governance/ethics/48994419.pdf

Subramanian, S. (2017) Worse than Tuskegee. Slate, February 26. Available from: http://www.slate.com/articles/health_and_science/cover_story/2017/02/guatemala_syphilis_experiments_worse_than_tuskegee.html

Toy, S. (2017) 45 years ago, the nation learned about the Tuskegee Syphilis Study. Its repercussions are still felt today. USA Today. Available from: https://www.usatoday.com/story/news/2017/07/25/tuskegee-syphilis-study-its-repercussions-still-felt-today/506507001/

Wiersma, M., Kerridge, I. & Lipworth, W. (2018a) Dangers of neglecting non-financial conflicts of interest in health and medicine. Journal of Medical Ethics, 44: 319-322. Available from: https://jme.bmj.com/content/44/5/319

Wiersma, M., Kerridge, I., Lipworth, W. & Rodwin, M. (2018b) Should we try and manage non-financial interests? British Medical Journal, 361: k1240. Available from: https://www.bmj.com/content/361/bmj.k1240

Conflicts of interest: All authors had financial support from the National Health & Medical Research Council (NHMRC, grant number APP1059732) for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Miriam Wiersma is a PhD candidate at Sydney Health Ethics, University of Sydney, studying biomedical innovation.

Ian Kerridge is Professor of Bioethics and Medicine at Sydney Health Ethics, University of Sydney, and a Haematologist/Bone Marrow Transplant Physician at Royal North Shore Hospital.

Associate Professor Wendy Lipworth is a bioethicist and health social scientist at Sydney Health Ethics, University of Sydney.

Image by Bill Dickinson and used under Creative Commons 2.0


On Not “Getting Out of the Way”: A Reflection on Steven Pinker’s Critique of Bioethics

Last week, the well-known psychologist, linguist and author Steven Pinker published an op-ed in the Boston Globe under the title “The Moral Imperative for Bioethics”.*

The article begins with an explicit mention of the CRISPR-Cas9 technique for “editing” genomes, and a brief tour through the many diseases which modern medicine and biotechnology are seeking to treat. He goes on to say that research in these areas is essential, a point he frames in terms of reducing the global burden of disease. He then says:

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

Get out of the way.

Now this is pretty striking, and unsurprisingly many commentators, including myself, were furious. However, I thought I would let the dust settle and see what emerged from the discussion on social media, an email discussion list I am a member of and elsewhere. The stories in Nature and PopSci were reasonable, and Alice Domurat Dreger, Julian Savulescu, Stuart Rennie, Christopher Mayes, Matthew Beard and Russell Blackford make good points.

One point of common ground here, and indeed with Pinker, is that there really is a problem with much bioethics regulation – the processes of research governance and ethics committee oversight. It can be slow, cumbersome, unpredictable, perverse, contradictory and so on. But even so, no one is suggesting we dispense with it altogether – only that it be improved. Pinker himself says:

Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

He also points out, later on in his article, that biomedical technologies which sometimes seemed very promising often fail:

Biomedical research in particular is defiantly unpredictable. The silver-bullet cancer cures of yesterday’s newsmagazine covers, like interferon and angiogenesis inhibitors, disappointed the breathless expectations, as have elixirs such as antioxidants, Vioxx, and hormone replacement therapy.

It is interesting that several of the technologies he mentions didn’t just fail: their failure was often covered up by manufacturers, and the regulatory system designed to catch these failures arguably didn’t intervene quickly enough. He doesn’t mention this, however; his argument is the quasi-libertarian one that scientists should be allowed the freedom to explore and innovate without excess regulation. He is less good at noticing the ways in which such innovation takes place in economic conditions which make regulation essential to cope with market failures.

He has two basic arguments for the need for bioethics to get out of the way:

First, slowing down research has a massive human cost. Even a one-year delay in implementing an effective treatment could spell death, suffering, or disability for millions of people.

And:

Second, technological prediction beyond a horizon of a few years is so futile that any policy based on it is almost certain to do more harm than good.

To the first: while on its face it looks plausible, if his second argument is valid then the problem is that we just don’t know whether the research we are doing now actually will save all those lives. Indeed, by investing in this research and not that, we may be wasting all those lives he points to. We just don’t know. As to the second, it is almost trivially true, but nonetheless these predictions are what policy-makers, research funders and investors (not forgetting the rest of us) have to make. But he says:

In the other direction, treatments that were decried in their time as paving the road to hell, including vaccination, transfusions, anesthesia, artificial insemination, organ transplants, and in-vitro fertilization, have become unexceptional boons to human well-being.

In other words, some of our bets paid off, some didn’t, and sometimes some of us bet the wrong way on the basis of moral objections which in hindsight look ridiculous. It’s not clear to me why bets the wrong way on grounds other than moral ones get a free pass, while only moral qualms get mocked in this way. I am sure we can make a list of bets the wrong way where moral qualms ought to have played a part and didn’t, so that we ended up with disastrous consequences for millions. Again, he doesn’t talk about that. The trouble with consequentialist (or decision theoretic?) reasoning of Pinker’s type is that you have to count all the consequences of all the options, not just the ones which favour your own biases. Indeed, if you are going to be a consequentialist you have to be rather good at predicting outcomes (not as bad as he says we are, in other words). But in any case, he reserves special opprobrium for moral reasoning, thus:

Biomedical advances will always be incremental and hard-won, and foreseeable harms can be dealt with as they arise. The human body is staggeringly complex, vulnerable to entropy, shaped by evolution for youthful vigor at the expense of longevity, and governed by intricate feedback loops which ensure that any intervention will be compensated for by other parts of the system. Biomedical research will always be closer to Sisyphus than a runaway train — and the last thing we need is a lobby of so-called ethicists helping to push the rock down the hill.

My initial reaction to the Pinker article was heated. I have spent my entire working life as a bioethicist (roughly speaking, from my first postdoc, as I was not in bioethics as a graduate student or before) with people claiming that what I do is variously a waste of time, a waste of money, ideologically suspect (from any and all directions), intellectually sloppy and so on. Sometimes these criticisms have been levelled at me personally (fine, I can bite back if I need to, and I’m not perfect and sometimes the criticisms have been fair), sometimes at my work (that’s the academic life, I can take it) and sometimes they have been levelled at me and my peers _merely_ because of presumed attitudes, beliefs and values I must have simply because I am a “bioethicist” and “this is what bioethicists think”. Sociologists often do this (not all sociologists…) and historians of medicine often do this (not all historians…). It’s tiresome. If someone wants to know what _I_ think there are various ways of finding out, but a priori judgements of what I “must” think because I am a bioethicist really… get my goat.

My initial reaction to Pinker’s article was that it was an egregious piece of grandstanding which, if it had come from the wilder shores of Twitter, we’d call trolling. However, that would be to attribute motives and intentions to Pinker I cannot verify. What I can say is that it is in a reasonably well established genre of writing which appears quite frequently in the professional medical press (for example, an unsigned editorial about 15 years ago in the Lancet titled “the ethics industry”), and it is perfectly reasonable and sensible to look at the writing as a genre piece, focusing on its rhetoric, implied audience and so on. We should also look directly at the arguments, and when I’d cooled off and read Julian and Alice’s pieces I could see that there are arguments in Pinker’s article which have merit, and I agree with them. But still, the rhetoric matters. If I am parking my car and someone comes up in my face and shouts “get out of the way”, I am liable to take that as a verbal assault even if he then gives me some good and compelling reasons why I might like to move my car a little. This person knows the effect of getting in my face and shouting at me, and it has little to do with the merit of his argumentation. Authors of style guides are pretty good at knowing how rhetoric works too. So are linguists and psychologists.

Never mind. I will get over it. I don’t have to take it personally, after all I don’t recognise myself in his description of bioethicists, so presumably he’s not talking about me anyway. (He wouldn’t know me from Adam).

Turning once more to the arguments, the one argument which has not been touched on directly, and I think is important, is that he says bioethics should not thwart:

…research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like “Brave New World’’ and “Gattaca,’’ and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs. Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

Several commentators have questioned whether we can be that sanguine about how well bioethics does the latter. But it’s the former that bothers me. He’s making an asymmetrical argument: we should discount hypotheticals about bad things, but instead be guided by hypotheticals about good things. To my way of thinking they are no less hypothetical. There’s a branch of the sociology of knowledge which explores this in some detail, the sociology of expectations, and one of the main findings in that field is that biomedicine relies on creating narratives about plausible social futures of technologies in order to attract investors and research funders, persuade regulators and so on. No one, nowadays at least, ever chucks money at researchers saying, go and do something interesting, tell us when you’re done.

So my more general point is that bioethics is _precisely_ a way of telling stories about new technologies and exploring them and seeing what we make of them. There are other ways to do this – for example, as Pinker’s own example shows, making films, writing novels and stories, and so on. Or indeed writing business plans, IPOs and suchlike. This is how humans think stuff through. It’s part and parcel of how we make technologies. It’s not an extraneous factor, that can be shoved out of the way. So, once again, rhetoric matters, and not as a bit of optional packaging, but as part of the real work itself.

Some day I am going to write a book about this. Maybe I should thank Pinker for getting my introduction going with a bang.

Richard Ashcroft

Professor of Bioethics
School of Law

Queen Mary University of London

@qmulbioethics

*Pinker has since made further comments found here.

Richard visited VELIM in February 2005 while he was an Australian Bicentennial Fellow, visiting the Centre for Applied Philosophy and Public Ethics, University of Melbourne. He and Dr Ainsley Newson were colleagues at Imperial College London.