Does the outcome change the value of the decision?
On organ donation and rational decision-making: reflections on how we judge our decisions based on hindsight.
A friend of mine recently donated a kidney to a complete stranger, which, aside from anything else I want to talk about, is just… wow! The fact that I'm not sure if I could do it myself gives me all the more admiration, awe and gratitude for anyone who does.
I don’t know about you, but I want to live in the kind of world where people donate their kidneys to strangers. So to that friend: know that it’s not just your kidney’s new body who’s been positively affected by your choice to donate. Just knowing that people like you exist gives me so much more hope for the world that we live in, and inspires me to be better myself.
What prompted me to write this post in particular was reflecting on how relieved my friend was when she found out that the recipient’s operation was a success. When you think about it, whether or not the recipient’s surgery went well and the kidney was accepted doesn’t change how good a thing it was to do. It doesn’t change the sacrifice my friend made, the baseline probability that the donation would be successful, or how important it is for people to do things like this. And yet it feels different, doesn’t it?
To know that you went through all of that, did the big scary thing, had the operation, and then it all failed and the kidney ended up in the rubbish bin, that the recipient didn’t make it or needed someone else’s kidney anyway… it would feel completely different. Devastating, even. Like you went through all that for literally nothing.
It made me stop and think about how we view life decisions generally. For some years now, I’ve tried very consciously to judge my past decisions based on the information I had at the time, rather than the outcome.
I’ve done a lot of research into the stats on giving birth at home versus in hospital. It turns out that if you give birth at home, your chance of having a complication is much lower (for a number of reasons, one of which is your lower risk of unnecessary medical intervention). If you do have a complication, however, the mortality rate for both you and your baby is higher. The end result is that your mortality rate, whether you give birth at home or in the hospital, is pretty much the same (it can be a little higher or lower depending on certain features of your pregnancy), while your risk of complications is lower at home in almost all cases.1 In my view, and especially given how I think I personally would feel at home, surrounded by lovely trees and nature, compared to in a hospital room, it makes more sense to be at home.
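To see how those two effects can net out, here’s a minimal sketch of the arithmetic with purely made-up numbers (they’re not from any study; the point is the shape of the calculation, not the figures themselves):

```python
# Illustrative numbers only (NOT from any study): a lower complication rate
# combined with a higher mortality rate *given* a complication can still
# produce roughly the same overall mortality rate.

def overall_mortality(p_complication, p_death_given_complication):
    # Simplifying assumption: deaths only occur via complications, so
    # P(death) = P(complication) * P(death | complication).
    return p_complication * p_death_given_complication

home = overall_mortality(p_complication=0.02, p_death_given_complication=0.0010)
hospital = overall_mortality(p_complication=0.10, p_death_given_complication=0.0002)

print(f"Home:     {home:.6f}")      # 0.000020
print(f"Hospital: {hospital:.6f}")  # 0.000020 -> roughly the same overall risk
```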
But then I picture a scenario where something terrible goes wrong. How easy it would be to say, "I should have given birth in hospital," and blame yourself for that for the rest of your life, when actually, you just experienced a highly improbable event. It doesn’t change the fact that you made the right decision with the information you had at the time.
Side ramble on measuring outcomes in forecasting
I work in forecasting science, and the way that we score forecasters is based on what actually happens in the world. That’s the only way we can judge! But how a single event resolves doesn’t, on its own, tell us how close the forecaster’s probability was to the true baseline probability at the time of their forecast. When an event happens, we have no way of knowing if there was a 0.1% chance it would happen, or a 99.99% chance, at the point when the forecast was made.
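As a rough illustration (this is just one standard scoring rule, the Brier score, not necessarily exactly how any given platform scores things), scoring a forecast against the 0-or-1 outcome looks something like this:

```python
# Brier score: (forecast - outcome)^2, where outcome is 0 or 1. Lower is better.
def brier(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

# If the event happens (outcome = 1), the confident forecaster scores better...
print(brier(0.9999, 1))  # ~0.0
print(brier(0.001, 1))   # ~1.0
# ...but one resolution alone can't tell us what the true baseline probability
# was. Only averaging scores over many resolved questions starts to separate
# skill from luck.
```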
One way forecasting can be impactful is in reducing existential risk. We might forecast the probability that AI kills off the human population by 2100, conditional on different actions we could take today. Thinking about how you measure the impact of those actions is super interesting. Say you implement Policy Z, and that reduces existential risk from 0.1% to 0.01%2. You’ve made a huge difference there – that’s a 90% reduction in risk.
But at the end of the day, either AI causes extinction, or it doesn’t. In the world where I implement Policy Z, humanity is most likely safe, but in the world where I do nothing, it’s still most likely that the apocalypse is averted. Has Policy Z actually made any difference?
On the other hand, even if I reduce the risk to 0.01%, there’s still a chance that the apocalypse happens. If I’ve reduced the risk by 90% but it still happens, have I made any difference there?
The numbers we use to represent impact, while by no means pulled out of thin air, still feel strangely arbitrary. You might say, okay, I reduced the risk of AI-caused extinction by 2100 from 0.1% to 0.01%. I’ve basically just saved 0.09% of 8 billion3, aka 7.2 million, lives. Go me!
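For what it’s worth, the back-of-the-envelope arithmetic behind that claim looks something like this (using the same illustrative probabilities as above, so don’t read anything into the numbers themselves):

```python
# Expected lives saved from a reduction in extinction risk, using the
# placeholder figures from the text (see the footnotes: both the probabilities
# and the 8 billion population are rough, illustrative choices).

population = 8_000_000_000
risk_before = 0.001   # 0.1%
risk_after = 0.0001   # 0.01%

relative_reduction = 1 - risk_after / risk_before           # 0.9 -> "a 90% reduction"
expected_lives_saved = (risk_before - risk_after) * population

print(f"{relative_reduction:.0%} relative reduction in risk")  # 90%
print(f"{expected_lives_saved:,.0f} expected lives saved")     # 7,200,000
```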
It’s a helpful metric, but it feels kind of meaningless when, in the real world, either the apocalypse happens, or it doesn’t. I haven’t actually saved 7.2 million lives at all. And that doesn’t even begin to account for questions like ‘what if we get hit by a meteor before AI has the chance to kill us?’
I don’t have a proposed solution here, these are just some rambling thoughts that popped into my head as I was thinking about judging decisions based on their outcomes!
Back to the main point…
As usual, I’ve gotten rather off topic (in a hopefully interesting way!). Reflecting on these things was a reminder to me to focus on being content with the decisions I've made, however things turn out in the end. Of course, we should learn from the new information we gain once the questions resolve, but let’s not mistake hindsight for information we always had access to.
A few years ago, shortly after starting a new job, one of the people I worked most closely with had a big mental breakdown and became completely toxic and terrible to be around. When I was looking for my next job, I started over-analyzing each interview, wondering what red flags I could watch out for to prevent this from happening in future. The truth is, there was no way I could have predicted that turn of events. It wasn’t a poor decision on my part – improbable things just happen sometimes.
Another thing I try to do is to consciously have this conversation with myself beforehand, especially for big decisions or where there’s a small risk that things could go terribly wrong.
Take the birth example. I’m not about to give birth anytime soon, but when I do, unless there are indicators that I’m at higher risk of complications, I would almost definitely want to do it at home. Beforehand, I’d want to have the conversation with myself and the people around me where I say, “look, in the very unlikely case that something does go wrong, we’re not going to view this as having made the wrong decision. It’s not just a whim, it’s based on the data. Based on the information I have, this is the right decision for me.”
There’s always the small probability that something does go terribly wrong, and an even smaller but very real possibility that it’s something that could have been prevented by being in a hospital. Knowing how easy it is to judge yourself in hindsight, it really does help to have these conversations, even if it’s just internally (or better yet, written in a journal or a letter to yourself).
It’s hard to think this way all the time – we humans are not the most rational of creatures! – but it’s definitely worth bringing a little consciousness to, where possible. It’s just a more peaceful way to view life.
Forecasting is not a concrete science, especially when it comes to looking at events this far in the future. The baseline probability that AI causes humanity to go extinct in the next century is very much up for debate, let alone the conditional probability based on different actions or policies. I’ve used these numbers as they’re easy to work with, but don’t read anything into them!
Not actually sure what population measure would be best practice to use here. Today’s? The forecasted population in 2100? Halfway in between? 8 billion seems reasonable.