I don’t want to write about medicine, but I started this newsletter with an anecdote about medicine so let’s explore and see if we get something interesting. (SPOILER ALERT: we will.)
I recently had dinner with a doctor whose chief complaint was that one of her patients was ‘non-compliant’.
“She wears the damn thing [glucose monitor], but I just don’t understand why she won’t look at it”.
Doc was genuinely perplexed, but her lack of understanding was mostly rhetorical. She didn’t want to hear what I had to say, which would have been
“GIRL, that woman don't give a damn what you say - she wants to feel like she has control over her own life. She wants autonomy, or at least the feeling of it - just like you do.”
I tried to keep this in mind as she went on.
“She likes to keep her blood sugar high before bedtime,” she scoffed, shoveling chicken biryani onto her plate.
“Understandable.” My reply was met with a furrowed brow.
Doc expected robotic commiseration from her strange corporate dinner companion, just as I expected my new physician friend to think deeply about her patient’s desires - or, if not, at least to understand that nocturnal hypoglycemia can cause nightmares, sweating, headaches, or even seizures. But whatevs!
“She’s probably afraid of waking up every few hours…or not waking up at all.” Doc shrugged, and in her eyes, I saw what Heidegger and Marcuse were talking about. How we are paradoxically subjugated by the very technology and advancements that were supposed to free us and help us see the world for what it is. Doc had been swept away by rAtIoNalITy. She knew the rational thing to do (take your fuckin’ insulin, ho), and any other behavior DOES NOT COMPUTE. Again, if I could have said what I wanted, I would have said, “Friend, it’s giving ‘don’t let morality get in the way of being a good person’”.
But again, I am hamstrung. This is why I find medicine boring, but the intersection of health and autonomy kept me engaged in this interaction. Because… if we don’t have autonomy over our personal health, do we have it at all? We all know that our therapists will tell on us if they suspect a planned exit. We also see that the state gets to decide if you may remove a fetus from your body, and medical intervention was made compulsory just five short years ago during COVID. The most interesting thing about medicine is that it was originally meant to give us more freedom from death and illness, but since we’ve begun to conquer them, medicine is increasingly being used to strip us of our autonomy… because if you can have “health,” you SHOULD want it.
I’m not convinced objective morality exists, but if it does, I suspect it would reflect a deep, but paradoxical desire for autonomy... and the funny thing is we don't even know what “autonomy” is.
Autonomy, who?
So, let’s have a little fun here and approach the idea of autonomy from multiple angles:
Existential/Epistemological
One view of autonomy is coherentism: a person is autonomous if their motives align with their values. By this logic, Doc’s patient ignoring her glucose monitor but going to the doctor would be a misalignment and she’d be lacking autonomy. Which is weird because her behavior strikes me as rebellious, but maybe she’s just dumb.
Another idea (to deal with dumb people?) is reasons-responsive autonomy, where someone is autonomous if they respond to “sound” reason. So Doc’s patient could be autonomous if she listens to Doc’s “sound” advice, but who decides what’s “sound” advice? Sure, we can calculate the medical benefit of regulating blood sugar, but how do you measure her comfort? Medicine attempts to do this, but it’s a futile attempt - which is what annoys me most about medicine. Bad logic doesn’t preclude autonomy, and logic has its limits. Aristotle thought poor deliberation made someone subhuman, but where’s the line? Does doing dumb shit just to feel autonomous make you dumb?
Then there’s Sartre’s idea that we’re condemned to be radically free which means that Doc and her patient are both totally free and have to accept the consequences of their own actions. Which is also weird.
The opposite would be the opinions of thinkers like B.F. Skinner and even Baruch Spinoza, who have a more deterministic outlook: we have zero autonomy. Everything we do is shaped by nature. Personally, I find this style of argument nonsensical. A system can’t be both complete and free of contradictions, so we have to have some level of autonomy.
TLDR: Philosophy doesn’t know what autonomy is but most arguments point to humans having some level of autonomy, and it looks like having a shred of consistency between your actions, thoughts, and desires.
Physiological/Psychological
If someone is hooked up to a respirator, on a scale of 1-10, I’d say they’re pretty much a two. A nine might be the smartest, most intuitive athlete at peak physical condition. On a temporal plot, autonomy would be an inverted U-shaped curve, where we start with none (name a fetus who asked to be born) and end with none.
We can’t measure our autonomy, but we don’t need to as long as we stay in the middle. If you’re hooked up to a respirator and then you get better, you go from 2/10 to somewhere much higher. If you die… well, 0/10 (do not recommend). As long as we’re alive, what we’re most concerned with is the feeling of being autonomous. You could push yourself to your physiological limit, but how would you know you’re there? It’s like Zeno’s paradox between you and death. Best you got is maybe taking amphetamines or some other Limitless-esque drug and feeling limitless. But the second you try to hit your Autonomy(Max) and jump off a building you crash out to zero.
TLDR: worst case is having close to zero autonomy and still being “alive”; best case we can feel like we have total autonomy right before we approach the limit.
Mathematical
I could work with the above description and try to write a Gaussian or inverse sigmoidal function to describe autonomy, but I doubt you’re interested in that. If so, lemme know… The easiest way to think about autonomy mathematically would be as a closed function where x = f(x): x depends on itself rather than on something outside of it.
However. It ain’t that simple… stay with me.
There are two issues with autonomy in mathematics.
Russell's Paradox: the set of all sets that don’t contain themselves - does it contain itself? Either answer contradicts it. It’s like making a list of all liars: if I don’t include myself, it signals that I’m a liar - because a liar would lie about being a liar. If I do include myself, I’m telling the truth, and that would make me a liar about being a liar.
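You can watch the paradox eat itself in a few lines of Python. This is my own toy illustration, not anything from the essay: model a “set” as its membership test (a function from things to booleans), and then ask the Russell set about itself.

```python
# Model a "set" as its membership test: a function from things to bool.
# The Russell set R contains exactly the sets that do NOT contain themselves.
def R(x):
    return not x(x)

# Does R contain itself? R(R) would have to equal "not R(R)" -- there is
# no consistent answer, and in Python the question literally never bottoms
# out: it recurses until the interpreter gives up.
def ask():
    try:
        return R(R)
    except RecursionError:
        return "paradox"
```

Calling `ask()` returns `"paradox"`: the self-referential question has no stable answer, only an infinite regress.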
Gödel’s Incompleteness: any formal system complex enough to include arithmetic can’t be both complete and consistent. So any system that is self-contained (x = f(x)) will always have true statements that can’t be proven.
The second you set x equal to anything, you’ve stepped outside the function. Just as the set that is our body can’t feed itself, a function needs input.
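A minimal sketch of that point (my example, not the author’s math): even a genuinely self-referential equation like x = cos(x) can only be solved by feeding in a seed value from outside the function.

```python
import math

def fixed_point(f, x0, tol=1e-10):
    """Iterate x -> f(x) from an EXTERNAL seed x0 until x is (nearly) f(x)."""
    x = x0
    while abs(f(x) - x) > tol:
        x = f(x)
    return x

# x = cos(x) has exactly one solution (about 0.739), but the iteration
# still needs x0 handed to it from outside -- the function can't feed itself.
x = fixed_point(math.cos, 1.0)
```

The equation is closed; the process of solving it is not. No input, no fixed point.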
TLDR: We rely on externalities, and therefore autonomy can never be total.
Computational/Cybernetic
Alan Turing came up with this thing called the Halting Problem: there’s no way to write a program that will always tell whether another program will eventually stop running. It’s like a computational liar’s paradox. It means one algorithm can’t have total control over another, but also that even an “autonomous” agent can’t have full control over its destiny either…lest it fall into an infinite loop of deliberation.
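Turing’s diagonal argument can be sketched in a few lines of Python. Everything here is a toy illustration of mine, with a deliberately fake oracle: whatever a candidate `halts()` predicts about the program below, the program does the opposite, so no candidate survives.

```python
def halts(f, x):
    # Stand-in "oracle"; this one just guesses False ("loops forever").
    # The same trap refutes ANY candidate implementation, not just this one.
    return False

def trouble(f):
    # Do the opposite of whatever the oracle predicts about f run on itself.
    if halts(f, f):
        while True:   # oracle said "halts", so loop forever
            pass
    return "halted"   # oracle said "loops", so halt immediately

# Our oracle claims trouble(trouble) never halts -- yet it plainly does.
```

Run `trouble(trouble)` and it returns `"halted"`, contradicting the oracle’s prediction. Swap in any other `halts()` and `trouble` betrays it the same way.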
TLDR: FULL AUTONOMY DOES NOT COMPUTE.
Moral
Immanuel Kant argues for what we ought to do with the Categorical Imperative, based on the autonomy he thinks humans have. He believes
• man is bound by his duty to laws;
• man is subject to laws—universal laws—legislated by himself;
• all he is bound to is to act in accordance with his own will.
“The thought of him only as subject to some law or other brings with it the need for some interest that will pull or push him to obey the law—his will has to be constrained to act thus and so by something else—because the law hasn’t arisen from his will…It might be the person’s own interest or someone else’s; either way, the imperative always had to be conditional, and couldn’t serve as a moral command. I shall call this principle…the principle of autonomy”
-Groundwork for the Metaphysic of Morals, Kant
So, we act according to our own rational, self-imposed laws, according to Kant. If we look at the mounting evidence above, he’s wrong about this (just like he’s wrong about women and children not being able to make their own laws…), but it’s only because he believes God mysteriously bestowed the apex of rationality on human decision-making. We’re not perfectly rational or autonomous, and we are not all able to act according to the laws we give ourselves. Often, we fall victim to our own desires or those of others. However, we do enjoy the idea that we’re autonomous, so we aspire to it - and even if it’s impossible, that has to count for something.
TLDR: We want to believe we make our own rules, but if you really believe this, please see your local metaphysician. It’s time for your noumenal check-up.
Political
Politics is just an extension and higher form of self-governance (Aristotle), and the degree to which we have political autonomy has more to do with the system we live in than anything else. Some systems allow for more autonomy than others, and with our persuasion being toward autonomy, we mostly favor those that seem to allow for maximum autonomy. Democracy at least appears to be the most blatantly autonomous system, but if you’re dead set on this, I dare you to read One-Dimensional Man. Political systems are one of the biggest limitations on our autonomy, regardless of the system we enjoy - or don’t. Second to death, the government has the most control over the limits of your autonomy.
TLDR: Again, we want autonomy, but it’s unclear how much we’re granted.
TLDR-TLDR:
Using a bit of operational coherence across these domains, it seems autonomy is rooted in uncertainty and desire. We’re quite uncertain about what it is and how much of it we have, but we’re happiest when multiple possibilities exist and we feel we have the option to choose one.
“I could quit any time I want.”
“I could take my insulin, but I don’t want to.”
Autonomy Two Ways
My poor physician friend wasn’t ready for a discussion of this depth. She just wanted to feel like she was doing a good job as a physician. If I could go back I’d offer her Autonomy Two Ways, both paradoxical:
CONCENTRATED Autonomy: requires a lot of willpower and acceptance of the fleeting experience of autonomy.
NEBULOUS Autonomy: requires no willpower, and you can have as much autonomy as you can bear as long as you're willing to accept uncertainty.
Neither way achieves absolute autonomy, but both grant autonomous feelings.
To achieve Concentrated Autonomy, we must embody Ethan Hawke’s character, Vincent, from the movie GATTACA. He had only two moments of autonomy in his entire life: 1) when he beat his “genetically superior” brother in a swimming match and 2) when he achieved his dream of going to space. The first moment established the discipline and hypervigilance to craft a life of his own choosing (well, a moment of one) - against all odds. But to achieve that momentary autonomy, he had to acclimate the rest of his life to a rigid, compulsory process - literally cracking his bones and extending them to increase his height to match the genetically enhanced doppelganger whose identity he purchased. His life was less about possessing autonomy and more about the habitual, compulsory pursuit of it.
Man’s pursuit of the sensation of autonomy has led him to conquer many things, but there’s always been a price to pay. We’ve gotten so good at using our discipline to conquer things that we’ve automated tools like glucose monitors so we don’t even have to be disciplined, but now Doc’s patient pays the price of being stressed and nagged to death by her device and her monitoring physician. As C.S. Lewis declared, in conquering nature, the price we pay is that we also conquer ourselves.
Vincent felt compelled to defy the forces that sought to strip him of his autonomy, and so paradoxically lost his autonomy. Had Vincent embraced the forces against him, he would likely never have gone to space, but he also might have lived a more comfortable life, and possibly achieved something for himself just as wonderful as going to space. Doc’s patient might set up her glucose monitor and automatic insulin pump so that she doesn’t have to worry about them. Sure, she’s giving in to the will of the machine, but maybe there’s a benefit. In this way, giving up your autonomy might create more of it.
In either case, nebulous or concentrated, as long as there’s uncertainty there’s a possibility for autonomy. If Doc’s patient accepts uncertainty and just lets the insulin pump do its thing, it’s possible she might free up more autonomy for herself. If she rejects the beeping of the machine, the autonomy she experiences lives in that moment, but she risks poor health.
Free to be Unfree
So, if you’re wondering which mode of autonomy I choose… take a look at this piece. I said at the beginning that I didn’t want to talk about medicine. I felt compelled to get something out of this event other than a medical interaction, but I also embraced the idea of seeing where things went instead of trying to defy my societal constraints. I started writing this weeks ago and had to force myself to keep coming back to it. When I did, I let myself freeflow, writing 10x more than I’ll publish. I freely chose to bind myself to this piece. It took way longer than I hoped, but overall I feel good about the conclusions I’ve come to, and I feel lucky to enjoy the concentrated autonomy that comes from finishing the piece, and also the nebulous autonomy of letting myself meander through it. I don’t think there’s a universally correct way to go about it. I think when you weigh the options you see that neither method achieves absolute autonomy, because it’s not possible. I accept that I am never going to be fully free, nor completely unfree. I accept the paradox of autonomy, and that’s about as free as you can get.