Aging as a Problem

Warning: very long post. I try to outline the concept, value and current status of addressing human aging as a fixable biomedical problem. Would this destroy our society? Probably not.


Getting older is the single biggest risk factor for most scary diseases (cancer, heart disease, Alzheimer’s…)[1]. Such diseases, caused primarily by aging, kill about 32 million people per year, or 87 thousand people per day. The simple graph below compares this to the bloodiest conflict in human history, as well as the most violent incident of terrorism:

Aging kills

To be fair, World War II didn’t involve the entire world, and the total population was smaller back then. But the 3.5-fold scale should adequately balance that. Try to visualize WWII raging, people dying in agony; if there were a chance to end it, would that be worth trying? OK, Stalin may have had a valid point about millions of deaths being statistics rather than tragedies. But it’s not hard to spot tragedy caused by aging: aging will kill your parents, and will force your children to watch you grow senile and forget who they (or you) are. They will likely watch you in a hospital bed that eats away a BMW-sized chunk of their savings every month[2], and when you are gone they will feel the absence of one of the most important things in their lives.

Seeing aging as a problem is of course not new, and throughout history different people have tried different methods to avoid it. But the scientific method, which has been so rewarding for us modern humans, has only been applied to aging for perhaps sixty or eighty years. And in fact, in the last two decades researchers have been able to slow down the aging process through genetic manipulation, drugs and environmental factors[3]. Almost all such work happens in model organisms (mainly worms, flies and mice), because they allow genetic modification and have short enough lifespans to make controlled experiments possible. This lack of anthropocentrism should not discourage us, however, since the same is true for almost all biological research and drug development. We are in fact surprisingly genetically similar to other animals: even fruit flies have parallels for ~75% of genes involved in human disease, and the biological pathways are even more conserved in terms of function. Thus aging is now de facto a modifiable property of organisms, though this has yet to be properly demonstrated in humans.

Caloric restriction rhesus monkeys

Despite this progress in understanding and modifying the process of aging, there is no clear-cut definition of precisely what aging is. This is in large part because the symptoms of aging (such as wrinkles, or age-related diseases) are often conflated with the underlying biological process[4]. In the research community, one currently favored definition is “loss of the body’s capacity to return to homeostasis after environmental insult”. A more traditional definition might be “comprehensive and progressive decline of physiological function”, and this conveniently serves as a decent working definition to distinguish aging from specific symptoms: does a given treatment alleviate a broad spectrum of symptoms, or just a few? Moreover, does it affect “normal” individuals, or only those suffering from a specific disease? Another reason we don’t have a clear definition is that we still don’t know exactly what causes the human body to decline with age. A number of theories have been proposed over the years: some have fallen by the wayside, while others remain popular despite evidence that they do not offer a complete explanation for the aging process (e.g. free radicals, or telomeres). More likely aging is multifactorial, although the different factors may converge on certain biological processes. Work is ongoing to identify measures of physiological age (as opposed to simple chronological age), of which DNA methylation has shown the best results so far.

Aging may seem universal because the animals we most commonly interact with (cats, dogs, livestock) go through much the same process as humans, albeit much faster. But there are many examples in nature of qualitatively different aging: tortoises can live much longer than humans, but more importantly their rate of mortality does not seem to increase over time. In other words they do age, but as far as we can tell they don’t undergo the same process of deterioration. Note that this doesn’t mean they never die, just that their probability of staying alive and healthy is constant year to year. This may also be true of lobsters, which do not seem to lose fertility with age either. Though less attractive, opposite examples also exist: salmon and octopuses mature, reproduce and then die abruptly rather than undergoing a gradual decline. Perhaps more remarkably, aging is not strictly a unidirectional process: some members of the genera Turritopsis and Hydra do grow old and decrepit, but are then able to return their entire body to a youthful state (here is an amusingly over-the-top story about them). Axolotl salamanders and planarian worms are famously able to regrow large parts of their bodies, which is a similar process of turning old cells into young ones. Indeed, from a biological rather than individual point of view, humans routinely perform the same feat: the egg of a middle-aged woman has the capacity to produce a complete human body in which all signs of aging have been erased. Similarly, the 2012 Nobel Prize in Physiology or Medicine recognized the demonstration that mature cells of our body can be reprogrammed into stem cells. Rejuvenation is thus biologically possible in humans, though we are still far from understanding the barriers that keep it from happening in our adult bodies.

Long-lived animals

The media can sometimes make it seem as though there’s a focused effort by brilliant scientists to solve the problem of aging (and all other problems, for that matter). Unfortunately that’s far from true. The biogerontological research community is growing, thanks in part to the aforementioned progress, but is still very small. I don’t have actual numbers, but I would guess the number of researchers is on the order of thousands. That is, comparable to a single mid-sized company. As another way to size things up, the National Institute on Aging is by far the primary source of research funding in the US, and has a budget of just over a billion dollars. ~20% goes to non-research expenses, and about a third of the remainder to social and geriatrics research. So we’re left with ~0.6 billion dollars. This may sound like a lot, but it is only ~12% of the National Cancer Institute’s budget (even though aging is the biggest risk factor for cancer). The military budget is more than a thousand times larger, but perhaps more critically the total US healthcare spending is at least one hundred times the entire budget for biomedical research (of which the biology of aging is ~2%). This despite the fact that a huge proportion of that healthcare spending is due to age-related diseases[2]. I won’t discuss this allocation here, but simply emphasize that the cavalry is not on its way. A smallish number of researchers are working pretty hard, but regardless of their talent and effort scientific progress is not straightforward; the fact that you’re in unexplored territory inevitably implies many false starts and detours. With the current setup, real progress on avoiding the problems of aging is likely to be slow.
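For those who want to check my arithmetic, here’s the back-of-envelope version (a sketch using the figures as quoted above; the ~$5B NCI budget is back-derived from the ~12% comparison rather than cited):

```python
# Rough arithmetic behind the funding comparison (figures as quoted in the text).
nia_budget = 1.1e9                 # NIA budget: "just over a billion dollars"
research = nia_budget * 0.8        # ~20% goes to non-research expenses
aging_biology = research * 2 / 3   # a third of the rest goes to social/geriatrics
print(aging_biology / 1e9)         # ~0.59, i.e. the "~0.6 billion dollars" above
print(aging_biology / 5.0e9)       # ~0.12, vs. an implied NCI budget of roughly $5B
```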

In summary: Negligible aging is biologically possible. Barring extinction of the human race, we will eventually turn the implausible-but-possible into reality, as we have for such miracles as flight, space travel and wireless communication. Whether this will happen in 50 or 5000 years is impossible to say for certain, because it depends both on our efforts and how many unknown unknowns we run into. At present only a minority of research is aimed at modulating the rate of aging itself, with the majority addressing individual age-related diseases or trying to understand specific cellular mechanisms that seem to be involved in aging. Some would argue that this should change, although such research is undeniably providing clues about aging itself. Regardless of allocation, more resources directed at this problem would unquestionably accelerate progress[5].


But what if we were to get there? What if we developed technologies that eliminated disease and death completely? In reality there would never be such an abrupt switch, but rather a continued development of things that add a few months or years of healthy living (as we have already been doing throughout modern human history). Thus one common fear, that life extension would mean additional years of frailty and sickness, is rather unfounded; as a biologist, I find it simply inconceivable that we would continue to find ways to keep a profoundly broken organism alive. Modern medicine does have tools to sustain life at its very end, but although the emotional impact of these final days is great, the actual period of time is tiny compared to the added years granted by treating disease in mostly healthy individuals. Come old age and multiple morbidities, the effectiveness of treatments drops drastically. And indeed the premise of research into aging, rather than specific age-related diseases, is to slow down the process that causes such diseases and thus maintain the youthful state.

So we can rest assured that the fate of Tithonus is not a realistic consequence of aging research. And as I’ve addressed in another post, neither is overpopulation. These common fears are simply not things we need to worry about. But of course there are a number of likely consequences from significant life extension, some desirable and others challenging.

The most obvious, perhaps to the point of under-appreciation, is that we would not suffer the loss of loved ones. But the vacuum left when someone passes away extends beyond personal loss; in the modern world, what you learn is indubitably a greater factor in your ability to contribute to society than is the infinitesimal genetic improvement that comes with being part of the next generation (in fact I would argue that genetic evolution is no longer applicable, but that’s a separate post). Particularly if we subscribe to the notion of combinatorial creativity, a person who continues to accumulate knowledge and is not subject to dementia or other debilitating effects of aging would perpetually and increasingly outpace even genetically superior persons who suffered from a complete knowledge reset every few decades.

And this effect of keeping everyone healthy and active would be on top of the direct healthcare savings that would follow from reducing (to say nothing of eliminating) age-related disease and infirmity (recently estimated to be trillions of dollars, just from the effect of currently available drugs on animal models). It’s highly possible (and discussed elsewhere) that the current demographic trajectory will lead to the collapse of traditional retirement plans in the foreseeable future, which would evidently be alleviated by simultaneously preserving people’s ability to do useful work and reducing their demand for healthcare. But honestly this is a bit misleading, because if we did not age there would almost certainly not be a concept of ‘retirement’ involving indefinite cessation of productive work.

This is an example of something that would be a radical and not strictly positive change in a post-aging world. On the other hand, if we remember that we would not be ‘worn out’ at this age, it’s not necessarily strictly negative either: The basic premise would be the same as now, that we work enough to support periods of non-work. We would only be changing the timing and frequency of such periods (which some people already do). At present, it is very rare for people to have more than one or two careers; there simply isn’t enough time to get good at many things. But if “retirement” meant some years of complete separation from your profession without any acute loss of ability to support yourself, you might decide to go in a different direction upon reentering the ‘active work’ phase. Without physiological decline, you would be able to have as many or as few careers as you had appetite for. And here it’s worth noting that physiological decline is not limited to disease: Post-aging, we (women especially) would no longer need to balance the choice of parenthood against building career capital or pursuing other life goals. While I’m not qualified to offer personal experiences here, I would estimate having the option of raising children whenever it suited your lifestyle or inclination to be exceptionally liberating. And breaking the confines of our biology in this manner would empower more than reproduction: no longer would professional sports (or dance, or military service…) be the exclusive realm of people in the early stages of life.

Of course, opening up such possibilities would break some of the assumptions we currently make in social interactions. Unable to correlate wrinkles with wisdom, we would have to do a lot more research to discover other people’s situations and perspectives. There would likely be serious (or complete) dilution of recognizable ‘life stages’, and quite likely we would have fewer spontaneous instances of social groups with a shared identity (e.g. college). The ‘elders’ of a family would not be easily recognizable to outsiders, and without attrition families might expand indefinitely (albeit likely much more slowly). It’s hard to predict how our ties to blood relations separated by many decades of experience might feel.

Returning to more tangible issues, one challenge that would be alleviated by extended lifespans is that of absorbing the ever-increasing sum of knowledge in the world. In 1940, ~25% of Americans finished high school, and <5% college. In 2014 those numbers were 88% and 32%, respectively. Even since 2000, the fraction of the US population with at least a college degree has risen from 25% to 33% (and graduate degrees from 13% to 18%). The average age of college attendance is also rising. In other words, the part of our lives deemed necessary for meaningful contributions to society keeps increasing. One solution (which is already taking place) would be increasing specialization, in other words allowing ourselves to be ignorant in many areas. But if we don’t want this taken to extremes, either our lifespan or our rate of learning must increase in pace with the amount of information worth knowing.

On the other hand, I sometimes hear the objection that with endless time we would end up trying everything and grow bored. This does not seem relevant, for the simple reason that the human race is creating faster than any individual’s capacity to consume[6]. More than a million books per year are published in the US alone, so to even stand a chance of running out you’d have to read three thousand books per day, under the incorrect assumption that the output won’t continue to increase. Similarly, there is no real end to creative processes, nor to improving your ability in whatever area. Certainly you could grow bored of it, but not because you “finished”. A more plausible objection is that without new blood there would be a dearth of innovation and creativity. This does seem to be a general trend, although it is not without exceptions; I think Tempest is an excellent album, and Dylan was 71 when he put that out. Plenty of examples exist in other fields as well. I don’t really have any hard data on this topic, and would not venture to explain what drives creativity in people at any age. But I wonder to what extent a drying up of ideas could arise from a choice not to get into new things “this late in life”…

At this point we’ve covered a lot of possible consequences, so let me conclude with a few short answers to objections that I hear with some regularity:

Curing aging is unnatural: Only if you don’t consider humans and the things we make to be part of nature. Either way, the same argument is equally applicable to correcting poor vision or brushing your teeth. Or to wearing clothes, for that matter.

With endless horizons, we would not be motivated to do anything: Possible, I suppose. But as far as I can tell, a decades-off demise doesn’t really seem to guide our daily routines. Overwhelmingly, our actions seem to be motivated by “I want people to admire me”, “All my friends are going to college”, “This person is nice/mean to me”, “I want that person to get naked with me”, “That guy makes more money than me”… Americans work an average of 47 hours/week, even though less than a third of this would provide food and shelter. We compare ourselves to our peers, and compete for whatever is scarce. When material needs are fulfilled we eagerly compete for scarce intangibles, such as love or fame. And the absence of death would not change this.

Curing aging goes against God’s plan: I am still unable to understand how anything we humans do could legitimately challenge an omnipotent God. If (some) God exists and this goes against his plan, then our attempts will simply fail. No need to worry.

“Death does not concern us, because as long as we exist, death is not here. And when it does come, we no longer exist”: OK, this argument from Epicurus was not actually delivered to me in person. The idea persists, though, and it may even be well-founded. But accepting it does not imply that we shouldn’t seek to make each of our future moments of existence as desirable as possible (which Epicurus would no doubt have agreed with). Whether or not we consider death to be part of our life, curing aging would make the experience better.

To summarize this section, most of our worries arise from imagining a future without aging but absent any other changes from the societal structure we have now. There is no question that things would change drastically if we cured aging. But this cannot be used as an objection, because society is certain to undergo drastic change regardless. Moreover, drastic societal changes were also brought about by codes of law, the steam engine, computers, the internet… Yet now that the damage is done, few of us would want to return to the society that preceded these changes. In some cases we can fairly imagine that change would have been resisted if the full consequences had been known in advance (e.g. pollution following the industrial revolution), but nobody is suggesting that we reverse even these examples.

If you happened to be born female a few hundred years ago, your lot in life was to be pretty, perhaps play an instrument, and hope that whoever you were married off to wasn’t abusive. From our present perspective, this lack of choice seems utterly barbaric. Now imagine looking back from a future society that has cured aging: would our lack of choice regarding sickness and death be considered similarly barbaric?


[1] Age outshines even diet (saturated fats etc.) for heart disease, although for diabetes it is second to diet in importance. For lung cancer smoking is a bigger risk (with aging 2nd), but aging is #1 for cancer as a whole. For the various neurodegenerative disorders and dementia there’s absolutely no comparison. And of course there is a miscellany of non-lethal diseases where aging is also the biggest risk factor (e.g. arthritis).

[2] In terms of society, on average about 20% of lifetime medical costs (including birth-related) occur before age 40, ~30% between 40 and 60, and half after age 60.

[3] The effect size is generally larger in simpler organisms. In a somewhat relatable organism, mice, genetic manipulation (of growth hormone pathways) has doubled the healthy lifespan, drugs have extended healthy life by ~10% (rapamycin, metformin), and caloric restriction by 30-40% (although there’s controversy about whether this effect is specific to inbred strains of lab mice).

[4] Not a single “anti-aging” cosmetic sold under well-known brands has any effect whatsoever on aging. Some may remove wrinkles, but that’s like saying that morphine cures cancer because the pain disappears. Some existing drugs (e.g. rapamycin, possibly NSAIDs) may affect the aging process, although they are not sold for this purpose. A huge number of supplements claim to affect aging, and the vast majority are undoubtedly bogus. A few are based on legitimate scientific studies, though without human studies it’s not clear that they will work as advertised (nor what the relevant dose is). I should note that aging is currently not recognized as a disease by the FDA, which means that one could not get a drug targeting aging itself approved for sale. It is possible that the first effective drugs against aging will be sold as supplements until the regulation changes (though it might change rather quickly if we have something in hand that clearly works).

[5] One successful example of this kind of impetus is the Apollo program, where a large influx of funding and manpower in the 1960s allowed the US to leapfrog the USSR space program.

[6] This ever-evolving world also discredits the concern that a post-aging human race would stagnate due to a lack of natural evolution. I will go over the details in a separate post, but in short the fact that the modern world now changes much faster than our reproductive cycle means that the mechanism of Darwinian selection is approaching irrelevance: increasing the successful reproduction of individuals whose genetics are better suited to the environment will not improve the next generation when the environment that the child grows into is very different from the one that favored its parents.

Attention

Prey animals are alive by default. As in, they can forage from an unresisting and generally abundant food supply. Aside from disease, they are killed by changes in the environment and by predation. Because of this, they need to constantly monitor their surroundings for such threats.

Predators are dead by default. If they do not bring down prey, they starve to death. But they’re much less likely to be killed by other animals. Consequently, they don’t need to pay attention to anything other than their mark. If they succeed there’s a brief period of indulgence, but they soon need to get back on the prowl.


I think we can view human behavior through this lens. A stable job that covers your expenses, perhaps even married = prey. You’re fine unless something upsets the status quo: a layoff, a depression, war… So it makes sense to follow the news and stay abreast of what’s going on in the world. What are people doing, who’s going to be president? But if you just started a company or signed on as a professional athlete, you’ve entered do-or-die territory. You have one very clear goal, and do not need to worry about anything else for the time being.

It’s hard to use this imagery without overtones of admonishment. I mean, who wants to be called prey? But the intention was merely to demonstrate two modes of attention that are appropriate for different situations. Two states of existence, between which we are free to choose. Truth be told, I think most of us strive to achieve the ‘prey’ state, at least from our mid-20s on. And it doesn’t seem unreasonable that we have been evolutionarily selected to pay attention to everything. But biological evolution could not anticipate the 21st century, and so it might be worth scrutinizing our need for prey-mode from time to time.


NB: I am not a naturalist and the above could be bogus as a description of nature. If so, I hope that it may still be useful as a description of life.

Thoughts // Patterns – Part 1

Some people might object to the notion that computers can have thoughts. Indeed, some did so in the wake of my previous post. The popular argument went along the lines of “humans can have original thoughts, computers only do what they’re programmed to do”. For the sake of further discussion, let’s examine the premises of this argument.

The first is that humans have original thoughts, which I suppose implies free will. But let’s start with the second premise: that computers don’t have ‘original’ thoughts, only predetermined processing of input. One could respond “go ask someone whose computer just crashed what input they used to make it do that”. But even if we chalk such occurrences up to thermodynamics, it’s not as easy to dismiss some of the seemingly creative products of AI development (like the image shown below). Of course you could argue that this “AI” is still a program, only with enough layers of complexity to produce unpredictable results.

Image generated algorithmically from random noise, on the theme ‘pagodas’ (by Google).

Which naturally brings us back to the first premise, the originality of human thoughts. I don’t have anything original to add to the fundamental debate of free will vs. determinism, so let’s limit ourselves to a weaker question: Can we confidently say that our conscious thoughts do not arise from a very complex ‘program’?

One objection from the field of biology is that we haven’t yet learned to read the instructions of this ‘program’ we’re born with, so how could we say what might arise from it? Sure we’ve sequenced the human genome, but until we know the function and regulation of each gene (plus the epigenetics and protein interactions) all we have is a text, without punctuation, in a language where we don’t have much grammar or vocabulary.

Another confounding fact is that a neuroscientist can put you in a scanner and tell you things that you didn’t know were happening in your brain. Heck, a trained poker player can do that. And if we’re only partially aware of our own thought processes, how well-founded can a subjective notion of having original, voluntary thoughts be? It doesn’t seem like such un-thought thoughts are confined to background processing. One can easily point to instances where we act with little guidance from our conscious thoughts, or where we have thoughts which we can’t control (e.g. dreaming, by day or by night). I’m guessing that at some point you’ve done something that you told yourself you wouldn’t. How could that have happened, if not by your brain sending your body signals that didn’t go through conscious processing? Which engine of motivation thus set you into motion? We could of course argue that you simply changed your mind a moment before acting. But, in my case at least, this doesn’t quite ring true; at times, the conscious desire to resist persists right through the offending acts. Passion, or lust, is a good example. There is a certain moment at which, empirically, things become inevitable. It can be felt throughout the body, a bit like the adrenaline rush of a fight-or-flight response, and it’s safe to say that it renders conscious thought irrelevant.

If we act without thoughts, and think without active control, is it possible that our actions are for the most part spurred by something other than cognition? That, like the elements of Aristotelian physics, we simply gravitate towards some desired state and subsequently create the narrative that portrays a conscious decision. I’ve often heard it said that fish don’t feel pain, let alone fear. Nevertheless, they clearly try to avoid undesirable situations (e.g. being caught). We call this “instinct”, but how do we know it’s different from what makes us run away? Perhaps the only unique trait of human intelligence is that we’ve somehow acquired a module for creating narrative, and thus our experience feels special. This would make human thought unique the way a child is; it has traits that make it unlike any other child in the world, but at the same time it’s just one member of the class “children”.

It’s easy to imagine a goldfish unable to understand our thoughts, but does that imply a clear-cut distinction where we have thoughts and it doesn’t? It’s harder for us to turn the argument around and imagine the perspective of a more advanced intelligence, and perhaps that’s just the point. A proverbial alien observer might see us as more intelligent than the goldfish, yet not consider either one capable of real thoughts. This conclusion might offend us, but how could we argue against it if we accept the goldfish as unthinking?


The catalyst for this post was Elie Maksoud, who by the way takes pretty pictures.

Unthinkable Thoughts

“There are wavelengths that people cannot see, there are sounds that people cannot hear, and maybe computers have thoughts that people cannot think.”

Richard Wesley Hamming

Something that is simultaneously trivial and fascinating (to me) is that the deeds of some other person my age will be very different from, and sometimes far greater than, my own. Trivial because it’s so obviously true, for all of us. But fascinating because this person has had the same period of time and an almost identical human body to work with, so their accomplishments must have stemmed from differences in their environment and/or thoughts they had that I did not. No doubt the former plays a role, but if we confine our analysis to a college classmate I think we can establish a role for the latter as well: Sitting in the same auditoriums, coming from a similar background, this person somehow achieves a different understanding of the subject matter (and the world).

We apply the term “genius” to those who make important realizations that escaped everyone else, and try hard to explain what made these people special. Why didn’t the Theory of Relativity occur to everyone? While such explanations often emphasize a special combination of talent and opportunity, it also appears that simple birth defects and accidents can produce genius-level ability in the otherwise unremarkable. Based on this, one might propose that our brains normally have barriers which block many thoughts from appearing. But what is the system that determines which thoughts we have? And as a natural extension of this, how do we set ourselves up to have the widest/wisest range of thoughts?

Your thought-subset

As so often happens, Paul Graham has an interesting comment on this: he argues that we’re unable to think clearly about things that are part of our identity (e.g. religion, ancestry, preference for Apple products), and so to expand the range of topics you can productively think about you need to minimize your identity. Thought provoking (*cough*), and a seemingly perfect philosophy if you’re a Buddhist inventor. But it does seem more like a surgeon than a full-on savior: even if true, it only tells us how to remove certain specific blocks from our thinking.

Another proposition comes from an instruction that I wish someone had given my undergraduate self. We students were frustrated with having to cram huge curricula in some courses, and often vented about the folly of closed book exams. The important thing was being able to find information on demand, not memorizing tons of facts you might never need, right? Well, kind of… Now that I’ve spent some time thinking for a living, it’s clear that most of our progress comes from connecting dots. That is, coming up with solutions based on multiple pieces of information. Sure, you could easily look up those same pieces of information, but if they’re not already in your head when you encounter the problem you miss out on the solution. Based on this, a big limitation to what we’re able to think would simply be the quality and quantity of dots already in our heads; the more you already know about, the wider range of thoughts you can have.

To me this seems quite in line with empirical evidence, although it’s also obvious that other factors play a role. For instance, there’s the person who knows a lot of facts but somehow can’t venture into uncertain territory. To quote Hamming again:

If you read all the time what other people have done you will think the way they thought. If you want to think new thoughts that are different, then do what a lot of creative people do − get the problem reasonably clear and then refuse to look at any answers until you’ve thought the problem through carefully how you would do it, how you could slightly change the problem to be the correct one.


So your ability to think clearly plays into it, as does the amount of knowledge you have to draw on. It seems to me that there’s also a huge amount of filtering that occurs before thoughts even enter your consciousness. That is, your brain actually processes a multitude of thoughts for every one that you’re aware of, but most of them are discarded almost immediately. Whatever governs this filtering process must have a profound effect on at least our subjective experience of thinking. I don’t know enough neuroscience to say anything really rigorous on this subject, but intuitively it seems possible that the filter is simply synaptic patterns formed by your past experiences. On a side note, maybe that’s where déjà vu comes from: subconscious processing leaking slightly into the conscious domain, so that when the thought is presented to consciousness proper it seems to have (indeed has) happened before.

Such a filtering mechanism would certainly constitute a type of biological limit to what thoughts you are able to think. But one could easily imagine more profound limits based on the physiological wiring of our brains. It would naturally follow that different wiring would allow different thoughts. Returning to the original quote from Hamming, I’m sure most would agree that our deterministically programmed computers can’t think human thoughts. But perhaps they (or future versions of them) can think a different type of thought, which we in turn aren’t able to.

In other words, the Venn diagram might look like this:

Your thought-subset, advanced

You might find it difficult to imagine the types of thoughts a computer would have; I certainly do. In order to come up with a decent answer we would need to examine what constitutes a thought, which I’ll leave for another post. But lest we let the barriers in our brains censor the very idea of inhuman thoughts, I’ll end with this reminder from Schopenhauer:

“Every man takes the limits of his own field of vision for the limits of the world.”

Dying doesn’t really affect overpopulation

When the subject of curing aging is brought up, someone usually responds along the lines of “what about overpopulation?”. It’s hard to define what constitutes overpopulation, since it depends not just on the number of people but also on our fluctuating ability to convert the sun’s energy into stuff we want, and our consumption of said stuff. But let’s assume that with our current consumption, rate of technological progress, population and its growth rate, limits to the Earth’s sustainable output either have already been reached or will be in say the next 100 years.

If this assumption is wrong then the original question is irrelevant anyway. But if it’s right, then it seems intuitively obvious that eliminating the majority of deaths would greatly exacerbate the problem. Obvious, but only because our intuition is next to useless in dealing with anything exponential (i.e. where some amount is multiplied by a constant value for every unit of time). Let’s take a look at two examples of population growth. We’ll simplify by rounding some numbers, and looking only at the number of women while assuming an equal number of mates. In A, every woman has one daughter after a certain period of time (say 20 years), and the daughters reproduce after the same period. Nobody dies. B is closer to our current situation: each woman has two daughters after 20 years, and when her great-grandchildren are born (60 years later) she dies.

Population growth in scenarios A and B

Quite obviously, B results in a vastly greater population, because births compound exponentially (in this case doubling every generation) while deaths only ever remove the oldest cohort. And that’s the essence of what I’m trying to describe here: linear effects are usually negligible in comparison with exponential effects.

So how important is the mortality rate in comparison with birth rate? In the graph below I plot three curves based on similar simplified math[1]. In each case I start with a population of 1000 women, and project 500 years into the future. The red line is our current situation (it’s not quite smooth because we’re rounding everything to steps of 20 years, and nobody dies for the first 80 years): at 20 each woman has the current global average of 2.5 kids, and at 80 she dies. The green line represents the complete absence of death but with the birth rate reduced to 2.38, which results in roughly the same population growth. The blue line is the “extreme” scenario of nobody dying, but each woman having only two kids in her lifetime[2].

Population growth scenarios

What does this mean? That the effect on population of eliminating ALL deaths (not just aging) can be cancelled out by one woman out of every eight having one or two kids instead of two or three. And if everyone stuck with two kids there would be 20-fold fewer people at the end of the period. So the answer to our question is that mortality rate is not at all important relative to birth rate.
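For the curious, here is a minimal sketch of the bookkeeping behind these curves (a toy cohort model in Python with illustrative names of my choosing; it is not real demography, just the simplified math described above):

```python
def project(daughters_per_woman, death_age=None, birth_age=20,
            initial=1000, years=500, step=20):
    """Project a women-only population in 20-year steps (toy model)."""
    cohorts = {0: initial}            # age in years -> number of women
    totals = [initial]
    for _ in range(years // step):
        cohorts = {age + step: n for age, n in cohorts.items()}   # everyone ages
        births = cohorts.get(birth_age, 0) * daughters_per_woman  # one birth event per woman
        if death_age is not None:                                 # remove the deceased
            cohorts = {age: n for age, n in cohorts.items() if age < death_age}
        if births:
            cohorts[0] = births
        totals.append(sum(cohorts.values()))
    return totals

red   = project(daughters_per_woman=2.5 / 2, death_age=80)  # 2.5 kids at 20, death at 80
green = project(daughters_per_woman=2.38 / 2)               # 2.38 kids, nobody dies
blue  = project(daughters_per_woman=2.0 / 2)                # 2 kids, nobody dies
print(f"red {red[-1]:.0f}, green {green[-1]:.0f}, blue {blue[-1]:.0f}")
```

Run it and red and green should end up within the same order of magnitude, with blue roughly twenty-fold lower: the point of the graph in three function calls.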

Thus far everything is simple math (with a few unassailable assumptions), and the conclusion is not a matter of opinion. It is of course hypothetical that population could feasibly be controlled by a reduction of births, but it seems quite likely[3]. In fact, I think it’s reasonable to assume that any country that had the capacity to eliminate aging and disease would have very low birth rates. As mentioned, the global average birth rate is 2.5. The EU average is 1.6, the US 2. Japan, Hong Kong, Taiwan and South Korea are all below 1.4. The majority of African countries lie between 4 and 7, which brings up the average. Empirically, birth rates have a very strong negative correlation with child mortality and overall level of education. An example of this effect can be seen in South Korea, which in 1970 had a birth rate of 4.53. Concurrent with its explosive economic growth, this rate dropped to its present value of 1.2.

It thus seems reasonable to assume that any society advanced enough to eliminate aging and disease would already have low birth rates, quite possibly well below 2 children per woman (the blue line above)[4]. Other societal changes, both planned and unplanned, are likely to affect the population dynamics of a society where death is voluntary. One possibility is that when we gain the right to live indefinitely, we lose the right to reproduce without limits. A simple version would be a default of one child per family, à la present-day China. Another version might be that you can live as long as you want, but when you have children you start your own aging process and expire in say 100 years. This latter idea sounds kind of crazy, until you consider that it’s the situation we have now (plus an option to delay the process). But as we saw earlier, this solution would have a greater effect psychologically than on the actual population unless the number of children is also limited. On the other hand, we might see unexpectedly strong effects from simple psychological adaptations to indefinite lifespan. In the absence of an expiration date there would be less reason to have children in your twenties. Everyone, but women especially, would have the choice to postpone parenthood in order to establish a career, travel the world or simply feel fully prepared. A big deal for the individual, certainly, but for the population? Well, if we delay the green line’s age of childbirth from 20 to 40 in the graph above, its final value becomes ~8.4% of the red scenario…
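That last number falls out of the same toy model; with the `project` sketch above it’s a one-line experiment:

```python
# Delay the green scenario's single birth event from age 20 to age 40;
# nobody dies in either green run, yet growth collapses relative to red.
green_40 = project(daughters_per_woman=2.38 / 2, birth_age=40)
red = project(daughters_per_woman=2.5 / 2, death_age=80)
print(green_40[-1] / red[-1])   # a small fraction, in the neighborhood of the ~8% above
```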

I won’t pretend that I can predict exactly how societal changes will influence population dynamics. Any number of possible (and seemingly impossible) scenarios might play out, with or without aging and disease. But the one scenario that we don’t have to worry about is that eliminating aging will lead to “everything the same, but with more people around”. Dying, it turns out, is not a viable solution to overpopulation. On the bright side this means that you shouldn’t feel compelled to die for the sake of the environment, although it also means that any viable solution will require you to think about how you live.


[1] Nerd-level population projections are available here.

[2] At this point the math is officially oversimplified. The example is true if every woman has 2.38 kids, but it doesn’t work if that’s the average value of some distribution and kids have the same birth rates as their parents: because the growth is exponential, going above the average has a greater effect than going below. But that just means that the real value is even closer to 2.5, so the point stands.
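The point in this footnote is just convexity (Jensen’s inequality); here is a two-line check, using a hypothetical half-and-half split that averages 2.38 kids:

```python
n = 25                                      # generations in 500 years (20-year steps)
mean  = 1.19 ** n                           # everyone has 2.38 kids -> 1.19 daughters
split = 0.5 * 1.00 ** n + 0.5 * 1.38 ** n   # same mean: half have 2 kids, half 2.76
print(split > mean, split / mean)           # True, and by a large factor
```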

[3] For example, while China’s one-child policy certainly hasn’t worked perfectly, it does seem to have had a significant effect. China’s birth rate is around 1.6, down from ~6 in 1950. By comparison, India’s is 2.5 (also down from ~6). As a result, although China’s population was ~50% larger in 1950, the UN estimates that India will be the most populous country by 2028.

[4] In addition to the fact that delaying aging would make for a population that is effectively younger, in terms of dependents vs. contributors.

Where does the future come from?

Day by day, small changes are turning our world into that of the future. We don’t really notice it, except when our parents are unable to use their new tech widget. Or, god forbid, the power goes out and we are shunted back to the previous millennium. At such times we might find ourselves perplexed by how things worked in the past: twenty years ago, meeting someone in the city required you to schedule a time and place in advance, and then actually be there at the agreed-upon hour (Californians in particular might find this hard to accept). Without cell phones you’d be stuck in limbo, waiting for someone who might be in traffic for five minutes or tapping their feet at a misunderstood meeting point. Moreover, you’d actually have to plan out how to get there before leaving your home, lest you subject your rendezvous to such a limbo. Reminiscence aside, the point is that as we go about our lives, worrying about mean people and philosophical ideals, the stage for our lives undergoes dramatic changes.

The future

The future indomitably forces itself upon us, but where does it come from? By the end of this century organ donors will no longer be in high demand, and interacting with digital devices will be through voice and gesture more often than by typing. But who makes it so? By whose authority does the real world turn into the imagined (and sometimes the unimaginable)? One answer would be that it ‘just happens’, that every single person’s actions have this tiny incremental effect that somehow adds up to the progress of the world. Leo Tolstoy would endorse this belief. In fact, he spent something like six years writing a book on the theme that “great men” are merely figureheads for the spirit of the time. That if Napoleon had died at Arcole another would have taken his place, and would have been forced to make the same decisions as Bonaparte did. It’s not easy to refute these claims. But it’s also kind of a non-answer, since we can’t imagine the “zeitgeist” substantially doing anything except guiding the hands of men. And without insight into how this supposed force works, we are no further along than when we started our inquiry.

Whether particular individuals matter or not, we can still ask: ‘what category of people bring about the future?’ (be this on behalf of zeitgeist or their own free will). The most common answer seems to be “the people in power”, although opinions differ somewhat on whether membership in this oligarchy is dependent on wealth, political office or corporate authority. As I see it, the assumption here is that after achieving a position of sufficient power, your day-to-day decisions are so influential that they shape the path by which we (the world) encounter the future. It’s easy to see how one would come to this conclusion: I know that my own future is shaped by my decisions, but it doesn’t feel like I have any power over ‘The Future of The World’. It’s obvious that Warren Buffett’s decisions can have broader impact than mine, so perhaps this feeling of impotence doesn’t apply to him. Nevertheless, while the people in power certainly have the ability to affect many lives, I would question whether their power isn’t acting on the present more than producing the future. Sure, the two are necessarily related, but I don’t know that they’re inseparable. If we look back through history, monarchs don’t stand out as the people that brought about the future. It would seem that more change came through Gandhi than George V (and that Gandhi’s position of power followed rather than preceded this change). Even looking at the present, the idea that the future is produced by those in power seems (at least at times) to be patently untrue…

At least that’s what people in Silicon Valley believe. Google wasn’t started by powerful people, but few entities seem more synonymous with The Future. You could of course argue that Larry and Sergey had entered the group of “people in power” by the time they brought us the future, that Google as an unmonetized search engine wouldn’t have shaped our world to the extent that the company now does. But examples of world-changing technology are endless – the origins of lasers can be traced through a legion of (more or less) humble scientists and engineers, Gould, Townes, all the way back to Einstein, without encountering an all-powerful beneficiary. And it’s hard to argue that putting the Encyclopedia Britannica on a 12 cm disc, restoring vision to the (nearly) blind and Steve Aoki concerts are anything but spectacular representations of The Future. To some people, then, the answer to “where does the future come from” is “from the ones that build it”. Politicians, venture capitalists and society at large represent circumstances that must be properly navigated while you’re making the impossible possible. People like Elon Musk and Ray Kurzweil would obviously fall into this camp (in fact Kurzweil might be called the camp counselor), but so would Francis Bacon and Benjamin Franklin. Part of this view’s appeal is that it’s tremendously enabling, and these days supporting examples seem to happen rather frequently. At the same time, however, Sergey Brin would probably tell you that building Google would not have been possible in Russia.

Self-driving car

On the third hand (we are talking about the future here), we might ask how the builder decides what to build. How could the hands create what the mind can’t conceive? With this in mind, perhaps the future really comes from those who extend our concept of the possible. In this case it’s not so much made on demand as glimpsed by someone, who then initiates mankind’s slow trudge towards making it reality. To quote Steve Jobs: “A lot of times, people don’t know what they want until you show it to them”. Of course this visionary could be an entrepreneur, or a president. But it could also be a humble science fiction writer, or a Hungarian doctor. I find it hard to completely rule out any of the three paradigms we’ve discussed, and this last possibility in particular is utterly impossible to test empirically (until somebody comes up with a way to measure ideas). If pressed for a definite answer to the original question, I might postulate that a true Agent of the Future is inevitably a collaboration between influence, invention and imagination. Though without saying anything about the dynamics between the three, this might not be any more useful than saying ‘the zeitgeist did it’.

Perhaps a better way to address the original question is by asking another: Can we somehow measure the fundamental unit of ‘Future’? In other words, is it possible to transcend the “I’ll know it when I see it” definition that the discussion above is based on? If so then this might be the most plausible way to identify the people producing it. But while we get that sorted out, we can at least be confident that wherever the future comes from, the only way to get there is through tomorrow.

Guest Reflection

Occasionally I’ll ask someone whose mind I respect to share something they’ve been thinking about for a while. I won’t necessarily agree with their statements, but I think someone once said something positive about being open-minded. In this piece my friend Lu Zhang Gram offers his reflections on that most humble subject, the meaning of life…



The End of Life is Love

Why are we here on Earth? The meaning of Life, the Grand Plan, the explanation of it all has probably tantalized young philosophers, men and women alike, since the invention of Language itself. As the Russian revolutionary Trotsky put it: “If the end justifies the means, what then justifies the end?”

The Nihilists contend that there is no Grand Plan beyond the necessity of death, survival and reproduction, that the meaning of Life is biological life simpliciter and that Reality is a strangely colourless, odourless, valueless mass of atomic particles. Venerable philosophers from Nietzsche to Sartre and Camus saw existence in this light. The unenviable task of Superman was to somehow weld His own feeble, abstract and ephemeral value predicates onto this unfeeling, dark void through a manly display of sheer willpower and in the process create a habitable space for Mankind.

However, as Heidegger points out, this viewpoint is flatly contradicted by the obvious fact that our daily lived experience is teeming with meaning. Unless we are specialized research academics, we do not navigate an everyday world full of neurotransmitters, line integrals and quarks and muons but rather one of mortgages, telenovelas, industrial strikes, corrupt landlords, get-aways, get-togethers, break-ups, heartbreak and all the rest that quietly passes by, day after day. During 99% of our lives, these experiences form the ground of our reality and the little army of valueless, colourless, charged particles conspiring to make reality appear like it contains such things as Minimum Wage seems better suited for Plato’s realm of Pure Ideas.

On the other hand, we may one day wake up fully realizing the certainty of our own Death, that final end to all conscious experience, and this may lead us to question the ultimate meaningfulness of our everyday pre-occupations. We may ask ourselves, like Trotsky, like Heidegger, like countless others before them: In the face of Death, what is it all for? We work to earn our living, but what is our living for? We save our money for security, but what is our security for? We expend our savings on goods, but what is our consumption for? We struggle for our right to freedom, justice and equality – but is it worth it in the end?

We may despair and declare the question of the meaning of life as itself a meaningless question, as Wittgenstein, the tormented genius of linguistic philosophy, was wont to maintain. But I believe there is a pattern in the madness. Life is a chameleon who would rather shift her colours to suit her Perceiver than reveal a glimpse of a core, invariant “essence”. Is it not true that when we find ourselves down in spirits, Life seems empty of any purpose, whereas when we are giddy with joy, Life does not seem to need a purpose after all? Then is it not all the more true that Life only appears as an endless stream of valueless particles in our more Scientific moods, as a chaotic profusion of petty worries, doubts and triumphs in our Everyday moods and as a nagging cacophony of philosophical questions in the mood of Angst?

Philosophers with their ever-enquiring, ever-critical minds have traditionally privileged the mood of Angst as the ultimate arbiter of the meaning of our own existence. This seems to me self-defeating, or as Marx would say, doomed to self-contradiction, for once in the state of Angst, one is bound to ask oneself what getting into that state was really all for in the first place. Therefore I would like to propose an alternative vision, one perhaps marginalized due to the sexual make-up of philosophy faculties throughout history. As Valerie Solanas wrote in her infamous SCUM manifesto: “A woman … knows instinctively that the only wrong is to hurt others, and that the meaning of life is Love.”

What happens in the mood of Love? Here I speak of Love in the broad sense, Love of Humanity, Love of the Other, Love of Growth, Love of Ideas and Causes, Love of Freedom, Justice and Equality (rather than love of possession, power or prestige). Something remarkable. Not the bleak hopelessness of despair or the aimlessness of giddy joy. Not the robotic intellectualism of the Scientific moods, the uncritical, un-reflexive conformity of the Everyday moods, or the philosophical masturbation of the Angst-ridden moods.

With Love comes simply: Trust. Peace. The loss of fear of Death. Openness. Communication. Being plugged into something greater than ourselves. Commitment. Getting off one’s armchair and doing something. Patience. Because we do things for their own sake, not to please people or for the sake of other things. Strength. Being brave and vulnerable and open and respectful all at once – because nothing else ever mattered quite as much.

Lu Zhang Gram

Personality osmosis

I believe it was Jim Rohn who said “You are the average of the five people you spend the most time with”, i.e. that your thoughts and personality (and consequently actions) will be shaped by the people around you. I don’t know if his statement is exactly right, but I’m willing to wager that it’s roughly correct. Which naturally prompts the curious-minded to ask: Why? And so what?

I can see several mechanisms to account for ‘why’. One is that, in my opinion, humans fundamentally want to agree. Or rather that our brains always do what they can to remove us from conflict, one way or another. At the same time, any two people are inevitably going to disagree on certain points, which represents a (mild) conflict that our brains seek to resolve. Many disagreements will be resolved through a bit of discussion, but a few core beliefs might well resist such resolution. What then? Since we’re discussing the people we spend the most time with, I’m going to assume that avoiding exposure to the disagreement is not a likely solution. Further eliminating the option of beating our friend to death with a hominid thighbone, it seems to me that our brains would be incentivized to escape the friction by shifting beliefs and assumptions until they are sufficiently aligned with the other person’s. Of course you could argue that spending more time in disagreement simply allows one to rationally appreciate the other person’s point of view, and I fully agree that that’s part of it. But the subconscious ‘conflict avoidance’ behavior shows up in so many other scenarios (and is so evolutionarily obvious) that I think it would be foolish not to ascribe it a part as well.

Another possible mechanism stems from the idea that conversations are an important way for us to process our thoughts. The degree to which this is true for each of us varies[1], but I don’t think anyone would deny that in addition to (and sometimes instead of) exchanging information, conversations allow us to permute and examine information we already have. It stands to reason that if we use conversation to process our thoughts (and tend to avoid solitary contemplation when possible), then what we talk about is what we think about. And thus whatever we have the most opportunity to talk about is going to constitute a large part of our thinking.

I don’t know how much this affects our deliberate thoughts, but it definitely plays in when we’re done with work, tired but without a specific agenda. We join our friends or housemates in whatever they happen to be discussing, and this discussion will stick in our heads as we go to bed. The next time we meet these friends, it feels natural to refer back to previous subjects. Before we know it we’ve started caring more about the issues of whatever group we spend time in, and have forgotten some of the things we used to care about. Our values shift a bit, and over time what we know about shifts as well. Conversely, this means that you are less able to develop your understanding of anything that your community is unable or unwilling to discuss (in casual conversation, mind you).

Friends

Which brings us to the ‘so what’. Namely: If the people we spend the most time with shape us to such a degree, should we perhaps make more conscious choices about who we spend time with? Only I don’t think this should take the form of ‘politely edit your friends‘, which is how I’ve mainly seen Jim Rohn’s quote treated on the blogosphere. For one thing, I think this advice is likely to fall flat once you close your browser and try to implement it. And as far as I can tell, organically developed friendships fare a lot better than deliberately created ones. Not to mention that focusing on your present situation seems quite likely to address symptoms rather than root causes of whatever you feel is wrong with your life. My advice would rather be: don’t edit your friends, but set yourself up to make the friends you need[2].

As an example: when I was a teenager I decided that I wanted to go to MIT, based simply on the idea that it was the best university in the world for sciency stuff. When it came time to apply, I gave the matter more serious thought: I concluded that it was not worth paying a lot of money for the higher-quality education MIT would provide, since I could indubitably improve my education free of charge by increasing my own effort. Although this reasoning still rings true, I now feel that it completely misses the point of going to MIT: the added value is not (primarily) what the professors are able to teach you, but who the other students are. Anybody who gets into MIT is smart, and ambitious enough to devote time to jumping through the hoops the application involves. These are the people you’ll be spending 4+ years of intellectual maturation and youthful initiative with, and some will likely become lifelong friends. So if the personality osmosis described above is real, pre-selecting for brains and ambition might well make a big difference in what kind of person you end up being. Is that worth a hundred thousand dollars? Maybe.


[1] I imagine that those who are close to me might say that I’m incapable of developing ideas without speaking aloud, and I wouldn’t argue with them. Although perhaps this blog will make their burden somewhat lighter.

[2] Another way to influence your influences would be by spending a lot of time reading; reading a book (and thinking about it) is not entirely unlike a conversation with the author, and you have some seriously smart partners to choose from here. The catch is that you need to spend serious amounts of time reading, which lacks certain rewards for a social animal, and that you need to work a bit harder at establishing a real conversation.

Valuing time

I’ve previously mentioned using your hourly wage as a quick and dirty way to evaluate whether something is worth spending your time on. In a serious discussion this would be highly conditional1, but I’ll stand by the assertion that making a minimal effort to put things in perspective can help you lead a fuller life.

Working hard

Yet for those of us who have semi-concrete plans for the future, I don’t think current wages are the best choice (even as a rough measure). My thinking goes like this: let’s say I intend to get a degree in medicine and become a neurosurgeon. We’ll also assume that this goal is not totally unrealistic, and that there’s a more or less well-known set of challenges I need to overcome to make it a reality. In that case, it’s safe to say that if I don’t fuck around I could become a neurosurgeon within a certain number of years, at which point I would also start being paid as a neurosurgeon (~$400,000/year starting out).

From that point on I’ll be earning at this level (or higher) until some age of retirement or death, let’s say 62 (a timeline that holds whether or not I become a surgeon). So assuming that I really do want this vocation, I could see my professional career as a phase leading up to this accomplishment, followed by a phase as a surgeon. Since the end point of the latter phase is fixed, every hour added to the first phase costs me (over the course of my career) not what I could be making right now, but the hourly wage of a surgeon (~$200). Would you hit the library instead of watching a movie if someone was paying you two hundred dollars to do so?2
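To make the arithmetic explicit, here’s a minimal sketch in Python. The ~2,000 working hours per year is my own assumption (roughly 40 hours × 50 weeks), and the efficiency factor anticipates footnote 2 below; neither is hard data.

```python
# Back-of-the-envelope value of an hour spent advancing a career goal.
# Assumptions: ~2,000 working hours per year, plus an "efficiency" factor
# for how reliably an hour of free time actually brings the goal closer.

def hourly_value(annual_salary: float, hours_per_year: float = 2000,
                 efficiency: float = 1.0) -> float:
    """With the career's end date fixed, each hour of delay costs roughly
    one hour at the target wage, scaled by how efficiently it's used."""
    return annual_salary / hours_per_year * efficiency

surgeon_salary = 400_000  # ~$400k/year, as in the example above

print(hourly_value(surgeon_salary))                  # 200.0 -> ~$200/hour
print(hourly_value(surgeon_salary, efficiency=0.5))  # 100.0 -> ~$100/hour (cf. footnote 2)
```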


Of course this doesn’t apply if you’re struggling to fund your basic needs, in which case your horizon needs to be much shorter (a good example of how context-dependent the value of a dollar is). And it’s mainly relevant when your goals would generate paychecks. But just as importantly, the reasoning only works if we actually dedicate all our free time to our goal. For this reason the time valuation model presented here might not be immediately useful for most of us, either because we don’t have a clear goal or because we’re unwilling to pursue it wholeheartedly. In the former case these cogitations have nothing to offer, except perhaps the realization that the value of time saved strongly depends on having something you want to spend it on3. But in the latter case, calculating the value of your time as I’ve suggested might help put the consequences of that unwillingness into perspective (at least a narrowly economic perspective).


[1] For one thing, it’s obvious that not every cost or reward can easily be converted into dollars. And in any case the value of a dollar depends on how many of them you already have, as well as on the number of dollars required to achieve all of your goals.

[2] Realistically, I doubt we’ll be able to use our present time to bring our goals closer with perfect efficiency. But even at, say, 50% efficiency the neurosurgeon example would value your time at $100/hour.

[3] Which might not be totally useless for a culture obsessed with convenience. Do you need a bread machine if you’ll end up watching TV instead of baking? I suppose, if whatever you watch is more valuable than learning to make bread. Do you need to do cardio at the gym if you have a bike and there’s a shower at work? I suppose, if work is more than 5 miles away…

A paradigm of possibilities

When I was a teenager I adopted the basic stance that I wanted to try everything at least once. I quickly realized that this wasn’t really feasible, since some things would probably end up with me dead and thus unable to try more things. But it worked fairly well as a rule of thumb.

Don't die

Then I matured a bit and made a few realizations. That time isn’t a limitless resource. That certain things require opportunities or skills that can’t be instantly conjured; that you’ll never step into the same river twice. While none of this argues against trying new things, it does shift your perspective a bit: if doing one thing can prevent you from doing another, if you can’t try literally everything, then you need to start making choices.

But which choices? Can we recreate the essence of ‘try everything’ within the confines of practical reality? If we only get a certain number of picks from the buffet of experiences, it would stand to reason that we should maximize the novelty of each choice. But does this extend to trying something new over repeating a great experience of the past? For example, do we order our favorite meal at a restaurant, or one that we’ve never tried before? The latter choice will in all likelihood be less enjoyable, taste-wise, but if we ever want to try something better than what we already know we’ll have to take that risk1.

An interesting choice

Such a choice might be controversial, but not very complicated. It gets trickier when we recognize that our pool of options does not remain constant over time. In other words, some of your actions will expand your opportunities (making money, for example) while others limit you (becoming addicted to heroin, say). With this in mind, I would propose the following as a basic stance for experiencing life:

A paradigm of possibilities: Follow whichever course of action seems likely to allow the greatest number of novel experiences henceforth2.

To illustrate this, let’s imagine that a (superior) foreign power invades your country. Do you resist, or surrender? Dying is pretty much the ultimate loss of options, so if the only way to avoid it is surrender, then that’s what our paradigm would propose. On the other hand, joining a totalitarian regime (or the Borg) might represent such a loss of options that guerrilla resistance would offer a life richer in opportunities (albeit an endangered one). This deliberation may seem Machiavellian and/or cowardly compared to, say, “make a stand for liberty”, but I wonder if it’s really any different from the underlying motivation of that nobler-sounding rationale.
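For what it’s worth, the maxim boils down to a simple maximization. Here’s a toy sketch in Python; the actions and option counts are invented for the invasion example above, purely to make the structure explicit:

```python
# Toy caricature of the paradigm: pick whichever course of action leaves
# the greatest number of novel experiences open afterward.
# All numbers are invented for illustration.

options_henceforth = {
    "surrender": 40,             # alive, but the regime closes many doors
    "guerrilla resistance": 25,  # rich in opportunities, discounted by the risk of dying
    "last stand": 1,             # near-certain death: almost nothing stays open
}

best = max(options_henceforth, key=options_henceforth.get)
print(best)  # -> "surrender", under these made-up numbers
```

Flip the estimates (say, if the regime is Borg-like) and the same rule picks resistance instead; the paradigm fixes the objective, not the answer.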

Before anybody brings up Barry Schwartz, let me emphasize that these possibilities are not the same thing as choices. They’re not options that we necessarily have to choose between, but rather potentialities that may unfold during our lifetimes. Let’s take a concrete example: if I choose to set aside money in a retirement fund, I might end up spending my sixties traveling or writing, or working regardless. That’s more possibilities than if I had no savings, but my default state would be to continue whatever I was doing as long as I was financially above water. Thus the added possibilities would manifest as the absence of forced choices, rather than as a set of actions suddenly forced upon me.

One final point I’d like to make is that while at first glance it would seem that never making choices would allow the most possibilities, this is not the case in practice. Sure, if you never get around to choosing a major you have every field open to you. But on the other hand you won’t have access to jobs that require specialized skills, nor to graduate studies, nor indeed to any of the intellectual experiences that require understanding of a specific subject. Similarly, if you never marry you may have “access” to every member of the opposite sex, but perhaps not to being a parent (not to mention that your pool of potential partners will shrink over time regardless). Your actions unlock new possibilities, and while you’re postponing a choice, others pass you by. Of all the thoughts I’ve shared here, this might be the most useful one, whether or not you buy into the unconditional commitment to curiosity: remember that life is not a fixed pie of opportunities. The options you have today might not be available tomorrow, and what is available tomorrow depends in large part on your actions today.


[1] Of course not everyone would prefer this ideology, just as not everyone would prefer entrepreneurship over a stable job. But if we subscribe to the idea of ‘try everything’ then the choice seems fairly obvious.

[2] Correspondingly, Paul Graham talks about staying upwind of promising opportunities.