Thoughts // Patterns – Part 1

Some people might object to the notion that computers can have thoughts. Indeed, some did so in the wake of my previous post. The popular argument went along the lines of “humans can have original thoughts, computers only do what they’re programmed to do”. For the sake of further discussion, let’s examine the premises of this argument.

The first is that humans have original thoughts, which I suppose implies free will. But let’s start with the second premise: that computers don’t have ‘original’ thoughts, only predetermined processing of input. One could respond “go ask someone whose computer just crashed what input they used to make it do that”. But even if we chalk such occurrences up to thermodynamics, it’s not as easy to dismiss some of the seemingly creative products of AI development (like the image shown below). Of course you could argue that this “AI” is still a program, only with enough layers of complexity to produce unpredictable results.


Image generated algorithmically from random noise, on the theme ‘pagodas’ (by Google).

Which naturally brings us back to the first premise, the originality of human thoughts. I don’t have anything original to add to the fundamental debate of free will vs. determinism, so let’s limit ourselves to a weaker question: Can we confidently say that our conscious thoughts do not arise from a very complex ‘program’?

One objection from the field of biology is that we haven’t yet learned to read the instructions of this ‘program’ we’re born with, so how could we say what might arise from it? Sure, we’ve sequenced the human genome, but until we know the function and regulation of each gene (plus the epigenetics and protein interactions) all we have is a text, without punctuation, in a language where we don’t have much grammar or vocabulary.

Another confounding fact is that a neuroscientist can put you in a scanner and tell you things that you didn’t know were happening in your brain. Heck, a trained poker player can do that. And if we’re only partially aware of our own thought processes, how well-founded can a subjective notion of having original, voluntary thoughts be? It doesn’t seem like such un-thought thoughts are confined to background processing. One can easily point to instances where we act with little guidance from our conscious thoughts, or where we have thoughts which we can’t control (e.g. dreaming, by day or by night). I’m guessing that at some point you’ve done something that you told yourself you wouldn’t. How could that have happened, if not by your brain sending your body signals that didn’t go through conscious processing? Which engine of motivation thus set you into motion? We could of course argue that you simply changed your mind a moment before acting. But, in my case at least, this doesn’t quite ring true; at times, the conscious desire to resist persists right through the offending acts. Passion, or lust, is a good example. There is a certain moment at which, empirically, things become inevitable. It can be felt throughout the body, a bit like the adrenaline rush of a fight-or-flight response, and it’s safe to say that it renders conscious thought irrelevant.

If we act without thoughts, and think without active control, is it possible that our actions are for the most part spurred by something other than cognition? That, like the elements of Aristotelian physics, we simply gravitate towards some desired state and subsequently create the narrative that portrays a conscious decision. I’ve often heard it said that fish don’t feel pain, let alone fear. Nevertheless, they clearly try to avoid undesirable situations (e.g. being caught). We call this “instinct”, but how do we know it’s different from what makes us run away? Perhaps the only unique trait of human intelligence is that we’ve somehow acquired a module for creating narrative, and thus our experience feels special. This would make human thought unique the way a child is; it has traits that make it unlike any other child in the world, but at the same time it’s just one member of the class “children”.

It’s easy to imagine a goldfish unable to understand our thoughts, but does that imply a clear-cut distinction where we have thoughts and it doesn’t? It’s harder for us to turn the argument around and imagine the perspective of a more advanced intelligence, and perhaps that’s just the point. A proverbial alien observer might see us as more intelligent than the goldfish, yet not consider either one capable of real thoughts. This conclusion might offend us, but how could we argue against it if we accept the goldfish as unthinking?


The catalyst for this post was Elie Maksoud, who by the way takes pretty pictures.

Unthinkable Thoughts

“There are wavelengths that people cannot see, there are sounds that people cannot hear, and maybe computers have thoughts that people cannot think.”

Richard Wesley Hamming

Something that is simultaneously trivial and fascinating (to me) is that the deeds of some other person my age will be very different from, and sometimes far greater than, my own. Trivial because it’s so obviously true, for all of us. But fascinating because this person has had the same period of time and an almost identical human body to work with, so their accomplishments must have stemmed from differences in their environment and/or thoughts they had that I did not. No doubt the former plays a role, but if we confine our analysis to a college classmate I think we can establish a role for the latter as well: Sitting in the same auditoriums, coming from a similar background, this person somehow achieves a different understanding of the subject matter (and the world).

We apply the term “genius” to those who make important realizations that escaped everyone else, and try hard to explain what made these people special. Why didn’t the Theory of Relativity occur to everyone? While such explanations often emphasize a special combination of talent and opportunity, it also appears that simple birth defects and accidents can produce genius-level ability in the otherwise unremarkable. Based on this, one might propose that our brains normally have barriers which block many thoughts from appearing. But what is the system that determines which thoughts we have? And as a natural extension of this, how do we set ourselves up to have the widest/wisest range of thoughts?

Your thought-subset

As so often happens, Paul Graham has an interesting comment on this: he argues that we’re unable to think clearly about things that are part of our identity (e.g. religion, ancestry, preference for Apple products), and so to expand the range of topics you can productively think about you need to minimize your identity. Thought provoking (*cough*), and a seemingly perfect philosophy if you’re a Buddhist inventor. But it does seem more like a surgeon than a full-on savior: even if true, it only tells us how to remove certain specific blocks from our thinking.

Another proposition comes from an instruction that I wish someone had given my undergraduate self. We students were frustrated with having to cram huge curricula in some courses, and often vented about the folly of closed book exams. The important thing was being able to find information on demand, not memorizing tons of facts you might never need, right? Well, kind of… Now that I’ve spent some time thinking for a living, it’s clear that most of our progress comes from connecting dots. That is, coming up with solutions based on multiple pieces of information. Sure, you could easily look up those same pieces of information, but if they’re not already in your head when you encounter the problem you miss out on the solution. Based on this, a big limitation to what we’re able to think would simply be the quality and quantity of dots already in our heads; the more you already know about, the wider range of thoughts you can have.

To me this seems quite in line with empirical evidence, although it’s also obvious that other factors play a role. For instance, there’s the person who knows a lot of facts but somehow can’t venture into uncertain territory. To quote Hamming again:

If you read all the time what other people have done you will think the way they thought. If you want to think new thoughts that are different, then do what a lot of creative people do − get the problem reasonably clear and then refuse to look at any answers until you’ve thought the problem through carefully how you would do it, how you could slightly change the problem to be the correct one.


So your ability to think clearly plays into it, as does the amount of knowledge you have to draw on. It seems to me that there’s also a huge amount of filtering that occurs before thoughts even enter your consciousness. That is, your brain actually processes a multitude of thoughts for every one that you’re aware of, but most of them are discarded almost immediately. Whatever governs this filtering process must have a profound effect on at least our subjective experience of thinking. I don’t know enough neuroscience to say anything really rigorous on this subject, but intuitively it seems possible that the filter is simply synaptic patterns formed by your past experiences. On a side note, maybe that’s where déjà vu comes from: subconscious processing leaking slightly into the conscious domain, so that when the thought is presented to consciousness proper it seems to have (indeed has) happened before.

Such a filtering mechanism would certainly constitute a type of biological limit to what thoughts you are able to think. But one could easily imagine more profound limits based on the physiological wiring of our brains. It would naturally follow that different wiring would allow different thoughts. Returning to the original quote from Hamming, I’m sure most would agree that our deterministically programmed computers can’t think human thoughts. But perhaps they (or future versions of them) can think a different type of thought, one which we in turn aren’t able to.

In other words, the Venn diagram might look like this:

Your thought-subset, advanced

You might find it difficult to imagine the types of thoughts a computer would have; I certainly do. In order to come up with a decent answer we would need to examine what constitutes a thought, which I’ll leave for another post. But lest we let the barriers in our brains censor the very idea of inhuman thoughts, I’ll end with this reminder from Schopenhauer:

“Every man takes the limits of his own field of vision for the limits of the world.”

Source Criticism – Part 1


Did you notice?

Genius is a will-o’-the-wisp if it lacks a solid foundation of perseverance and fanatical tenacity. This is the most important thing in all of human life.

Thomas Edison

I would like to ask you to take, say, 30 seconds to think about the above statement. Compare it to your own experiences and convictions, then score it on a 10-point scale according to how strongly you agree with it.

Portrait of Thomas Alva Edison by Abraham Archibald Anderson (Google Art Project).

Edison, the quintessential American inventor, is often held up as a paragon of industriousness and perseverance. He supposedly tried about a thousand different light bulb designs before finding one that worked, and allegedly slept only four hours per night. He did not, however, make the above statement. Adolf Hitler did. Try reading it again, imagining a pre-war Hitler as the speaker. Do you think you’d have given the same score if you’d had this visual in mind the first time?

Now, it’s not really surprising that we’re swayed by appearances; the halo effect is a well-documented phenomenon. What’s interesting is whether this mental shift in fact happens before our brains start consciously processing information, creating a bias that the brain then actively rationalizes. That’s how it feels when I think about my own experiences of this sort, and the implication would be that when making such biased choices, we haven’t the faintest notion that we’re doing so. As we all know from the communication breakdown of a quarrel, irrationality is a lot worse when we think we’re being rational.

This isn’t really (a lack of) source criticism in the traditional sense, but it is in the sense of basing decisions on “knowledge” without asking where it came from. But so what? Does it actually affect our lives if we occasionally do things without paying attention to what prompted the action?[1] When we’re doing something where source criticism obviously matters, like searching for information on the internet, we’re all aware of the need for it. But then again this discussion isn’t about the things we realize, but the things we do without even noticing…

Try thinking of the last article you read online. Doesn’t matter whether it was about avoiding stress, the value of college educations, financial developments or whatever. Stop for a second, and try remembering as much as you can about it. All done with that? OK. Who wrote it? Not “where did you read it?”, but who actually wrote the article? If you’re like me, you won’t remember. You might argue that it doesn’t matter who wrote it since the site is legitimate, and I’ll address this in part 2. But for now let’s just agree that you don’t know or remember who wrote the article. Now, let me ask you this: do you ever, in casual conversation, say something like “I can’t remember exactly where I read this, but…”? I certainly do. And what happens when we do this is that some information from the article is passed on to our friends, with us replacing the unidentified website as its source.

So what’s the big deal? Of course not everything we say in casual conversation should be taken as gospel. But we evaluate the statements we hear and accept them based on their merits, right? Right. But this is not about the things we do when we’re paying attention, and attentive listening is not the only way that we ‘learn’ things. I would argue that there’s a body of information, let’s call it the collective consensus, that we regularly draw on largely without realizing it. Anything “they say…” falls into this category, as do things that you “just know”. And the collective consensus does affect how we lead our lives. For example: in California, we know that it’s best to limit your gluten intake, and gluten sensitivity is quite common. In consequence, there are a large number of gluten-free products available. In Berlin it is more difficult to find gluten-free scones, and (in my experience) quite rare to hear anyone talking about the subject. Whether this means that gluten isn’t a problem or that the Germans aren’t aware of it won’t be discussed here, because the point is merely that the information that comprises the collective consensus does in practice shape (each) society.

How does information enter the collective consensus? I won’t even try to give a full answer to that here, but certainly one way is through our casual conversations. Now, the problem isn’t when overtly biased stuff tries to force its way into the collective consensus, since that is pretty effectively filtered out. Not the stuff we notice, in other words. But continuing from the example above, there are mechanisms by which information can get in without us noticing it. Here’s an example of how this might happen: My friend’s dad sends her a book on seafood depletion. She reads it but isn’t quite convinced, so she keeps eating seafood and quickly forgets about it. But then, six weeks later she goes: “I read somewhere that there’s 26 pounds of bycatch for every pound of shrimp, and it’s all just thrown away…” Since I trust my friend, this factoid enters my subconscious as ‘semi-true’, and after that happens a couple of times the force of repetition means that I “just know” that every bowl of shrimp means two gallons of dead fish[2].

Notice that in the first step my friend doesn’t read the book because of its credentials, but because her dad suggested that she do so. When she tells me, she doesn’t remember the source, and thus assumes that position herself. And by the time it’s entered my subconscious, I couldn’t possibly say where it came from. So in each step the requirement and/or possibility for source criticism is removed[3]. Why is this problematic? The original information wasn’t necessarily wrong, but the fact that it’s possible for it to enter the collective consensus without critical evaluation, and without anyone noticing its infiltration, allows useless or destructive information to get in. An example of this causing trouble is the pursuit of alchemy throughout the 17th century; what might those natural philosophers have achieved if they hadn’t been chasing the Philosopher’s Stone?

So what do we do? There isn’t an obvious solution, as far as I can tell. Since all this is about the things we don’t notice happening, any active defense would be useless unless it worked before the infiltration had a chance to happen. One way to do this would be to compulsively demand a source for every piece of information. But that solution would pretty much preclude the use of the collective consensus, and make a lot of things go a lot slower. Quite possibly this wouldn’t be worth it for you. But then again, if you happen to be doing something where you can’t afford to be wrong, maybe it would.



[1] If nothing else, it certainly affects Israeli prisoners up for parole (and in the spirit of this post, here’s the original paper).

[2] A rather surprising finding from psychological research is that we will apparently believe things that we hear over and over, even when we know the source is not reliable. See here and here. The good news is that this only works when we aren’t really paying close attention. Say, in casual conversation…

[3] If you’re a connoisseur of pickpocketry, you may notice the parallels to a ploy used by groups of thieves: the first lifts the wallet out of your pocket and lets it fall to the ground, another picks up the wallet, and passes it to a third who’s walking by. Catching the first in the split second where he has your wallet is unlikely, and the second isn’t doing anything illegal by picking it up. Nobody’s kept track of it by the time the third person has it, and so the whole group escapes accountability.

Opportunity cost

One Sunday each quarter, the California Academy of Sciences is free to visit.

California Academy of Sciences Rainforest

Or is it?

You don’t have to pay for admission, true. But since the event is both infrequent and well-known you’ll need to wait in line for, say, an hour and a half, even if you show up before they open the doors. How much do you make per hour (after tax)? If more than about $23 (that is, $35 divided by 1.5 hours), one could argue that you’re losing money by standing in line instead of just paying $35 for entrance on a less crowded day.
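For the numerically inclined, here’s a minimal sketch of that break-even arithmetic in Python (the figures are the illustrative ones from the paragraph above, not official prices):

```python
# Break-even value of your time: free day with a wait vs. paid admission.
ticket_price = 35.0  # regular admission in USD, as quoted in the text
wait_hours = 1.5     # extra time spent in line on the free day

break_even_rate = ticket_price / wait_hours
print(f"Waiting beats paying only if your hour is worth less than ${break_even_rate:.2f}")
# -> $23.33
```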

Of course it’s not quite that simple; many people couldn’t simply work 1.5 hours more if they needed extra cash. And you might have been having fun with your friends while waiting. Though on the other hand the crowding doesn’t end once you get inside, so arguably your experience of the museum is inferior as well. But the point is that we ought not to compare expenses of time/money to doing nothing, but rather to what we could have spent it on instead.

The term used to describe this in economics is opportunity cost, as in the value of the opportunity you gave up through a given choice. And the concept is particularly important in situations like the above where the explicit cost of a choice is zero. You may have heard the phrase “There ain’t no such thing as a free lunch” from either Milton Friedman or Robert A. Heinlein, and the point is that there is an opportunity cost to everything.

The clearest example of opportunity cost might be a game show where you give up a sure reward for the chance at a larger reward. The choice to gamble isn’t free, because if you’d declined you would have gained something. And the same principle holds throughout our actual lives: Going to the movies doesn’t cost $12, but $12 plus two hours that you could have spent reading a book or learning web design. Eating dinner at Chipotle doesn’t cost $8, but $8 minus whatever you would have spent to cook at home (assuming you’re not skipping dinner). A free chair on craigslist will still cost you gas to pick up.
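Stated generally (my own formulation, a sketch rather than a textbook equation), the pattern in these examples is: true cost = explicit cost + value of the best forgone alternative − expenses you avoid. In Python:

```python
def true_cost(explicit: float, forgone_value: float = 0.0,
              avoided_cost: float = 0.0) -> float:
    """Explicit price, plus opportunity cost, minus expenses avoided."""
    return explicit + forgone_value - avoided_cost

# Movie night: $12 ticket plus two hours valued at, say, $10/hour (assumed).
print(true_cost(12, forgone_value=2 * 10))  # -> 32.0
# Chipotle: $8, minus the ~$3 (assumed) a home-cooked dinner would have cost.
print(true_cost(8, avoided_cost=3))         # -> 5.0
```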

Of course it’s possible to get a bit carried away: I’ve heard it said that we shouldn’t ever do something if someone else can do it for less than we make per hour. Sure, specialization is a foundation of modern society, but if you only have a single skill you’d better hope that demand for it never drops. And it would be naive to imagine that we could actually calculate the precise opportunity cost of anything when the “cost” could include anything from hormonal imbalance to the respect of others. Nevertheless, I don’t think the importance of understanding opportunity cost can be overstated if you want to efficiently pursue your life goals, whatever they may be.


Thanks to Johan Larsson, whose question about opportunity cost prompted this post.


Opinions

We all feel entitled to certain things: Nobody would pay to breathe. We generally feel that we can do whatever we want in the privacy of our home, and sometimes when we’re on vacation. Depending on where and how we grow up, we gradually add new things to the list. Some would say that we’re all entitled to a job, or even to medical care. And there are definitely a few people who, whether they admit it or not, feel entitled to a life of beaches, cocktails and sleeping in. If nothing else, in many parts of the world we are constitutionally entitled to life, liberty and the pursuit of happiness.

But are we entitled to hold an opinion?

Most of us would say yes. Charlie Munger would say no, although we can earn the right:

“I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Munger has an impressive intellect, which is probably why he’d venture to make a statement with such strong implications. But it also means that it’s worth taking a minute to consider what he is saying. Is it possible that holding opinions indiscriminately is bad for you?

A discussion of what we’re entitled to could become rather philosophical, but let’s examine what practical implications it can have to hold an opinion on something. In other words, what do our opinions do for us?

To my eyes the main effect seems to be simplifying cognitive processing: If you’ve already filed electronic dance music in your brain’s “bad” folder, you can very quickly deal with anything that falls into that category. If you know that you prefer fish over beef, you save a lot of time in restaurants and supermarkets. So your suite of opinions acts as a paradigm for going through life, giving you a set of assumptions that save time and mental effort.

How does it work out in practice? I don’t smoke because I consider it expensive and dangerous. I don’t know if this opinion would quite stand up to Munger’s criterion. But if someone told me that $7 for a pack of cigarettes isn’t a big deal, I could rely on my understanding of opportunity cost and compound interest to realize that by not smoking I could buy not one but two rather nice cars over a ten-year period, or a rather nice house if I wait 40 years[1]. And if that person told me that their uncle smoked for 40 years and didn’t get cancer, I would know that anecdotal evidence is meaningless for events that are at least partially random, and that all we can effectively do in life is skew the averages. So I think it works out in this case. Another example: with no limit on holding opinions, I would probably have said that if you are intelligent you will end up being successful in some field or other. Why do I think so? For one thing, I admire intelligent people. And society in general often equates success with intelligence. But if I’m being honest this opinion is not very well-founded at all. I’ve never looked at the proportions of intelligent vs. random people being successful in a large dataset, and it’s clear that there are many successful people who are not the most intelligent (even within scientific research). I’m not saying that intelligence doesn’t correlate with success, but if I’m not sure about it, what do I gain from holding the opinion? I can’t think of anything offhand, whereas living by this opinion would risk ignoring other factors that contribute to success. Going back to Munger, I would probably be better off by not allowing myself to hold this opinion.

So opinions can help speed up our cognitive processing, but when they aren’t based on serious thinking and research they can lead us to the wrong conclusions. Worse, because of confirmation bias we will invariably pay more attention to things that seem consistent with our opinions (even when the opinion was originally based on nothing) and can thus end up quite irrational. Whereas not holding any opinion that we didn’t develop scrupulously will prompt us to analyze things with a bit less bias.

Why do we love our opinions so much? For one thing, our brains appear to be unexpectedly fond of minimizing effort (for a fascinating treatise on this I heartily recommend Thinking, Fast and Slow by Daniel Kahneman). But perhaps another reason is that opinions help define our sense of self. When we demarcate our likes and dislikes we are effectively constructing our self-image, whether we intend to or not. This may explain why, in addition to holding them, we feel compelled to express our opinions whenever they appear (even remotely) relevant. To repeat our previous analysis, what does this do for us? Is it beneficial to establish and reinforce your self-image? I wouldn’t venture to say, but it does seem to directly stimulate the pleasure centers of the brain. On the other hand, I’m probably not the only one who has offered an opinion during conversation and quickly realized that what I said was for my own benefit rather than a real attempt to understand and relate to the other person. With a bit of misfortune this turns the conversation into ‘parallel monologues’, and frankly I would be at fault for making that happen. And though we may not be doing ourselves any immediate disservice when we thrust our opinions out into the world wide web, we are still subjecting the world to them. Though anyone is free to ignore them, we seem to have a natural inclination to trust what we read. Even Marcus Aurelius had to remind himself that “everything we hear is an opinion, not a fact”. If what we merely express as opinion can become rooted as fact, perhaps it behooves us to give our opinions greater scrutiny?

This is not meant to admonish anyone. After all, I have little legitimate basis on which to judge the opinions of people I do not know. Rather, I would present a hypothesis that changed my thinking a bit, and which I find to have some value. At the same time, this ought to set a standard that I wish to hold each of my posts to; if at any point I post something that hasn’t been considered carefully, I sincerely hope that someone will be able to call me out on it.


Thanks to Shane Parrish at Farnam Street (and by extension to Charlie Munger) for reminding me to speak only after much thought, and extending this concept even to holding an opinion.


[1] The average smoker in California goes through a pack per day, so 7 packs/week ≈ $49, or $25,480 over ten years, with an additional opportunity cost of $13,510 compared to investing the same amount at the historical S&P 500 rate of return of 8%. Total gains after 40 years would be about $750,000.
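For anyone who wants to check these figures, here’s a minimal sketch in Python. It assumes the $49 is invested weekly as it accrues and compounds at a nominal 8% per year (the S&P 500 figure the footnote uses), ignoring taxes and inflation:

```python
# Sanity check of the footnote's smoking math: $7/pack, one pack a day.
weekly_saving = 7 * 7.0  # $49 per week
weekly_rate = 0.08 / 52  # 8% annual return, compounded weekly

def future_value(weeks: int) -> float:
    """Future value of a level weekly contribution (ordinary annuity)."""
    return weekly_saving * ((1 + weekly_rate) ** weeks - 1) / weekly_rate

contributions_10y = weekly_saving * 520  # $25,480 paid in over ten years
invested_10y = future_value(520)         # ~$38,990 if invested instead
print(f"10y saved:    ${contributions_10y:,.0f}")
print(f"10y invested: ${invested_10y:,.0f} (extra ~${invested_10y - contributions_10y:,.0f})")
print(f"40y invested: ${future_value(2080):,.0f}")  # ~$750,000
```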
