
Understanding ‘Understanding’ – A Guide to Knowledge and Learning

In this article we will attempt to understand ‘understanding’. We’ll see what it means to ‘understand’ something, from both an abstract philosophical perspective and a tangible neurobiological one. Not only will this exploration of understanding and learning be interesting in its own right, but it will also provide hints as to how we can facilitate more effective means of achieving both. What more could you want from an article?

Hopefully you can forgive my forsaking of brevity (the article is about 3,000 words) for the benefit of clarity and comprehensiveness. As you’ll see, and perhaps quite obviously: a comprehensive understanding is oh so much better than a shallow one. Give it a chance, and I promise it will be worth your while.

Now without further ado, let’s get into it.

Understanding

What does it mean to understand something?

Ask yourself: do I understand planetary motion? Gravity? Do I understand what atoms are, and how these building blocks can give rise to the brilliant complexity we see around us? Do I understand colour? Do I understand how planes fly? Do I understand my own behaviour? Do I understand why 1+1=2?

Some of these you may have assured yourself you understand perfectly well; others may have given you pause: “wait – do I?”

This uncertainty we have when trying to ascertain whether or not we truly do understand something illuminates the question of what, exactly, it means to understand: how much knowledge must you possess before you can claim that you do?

If all of this seems like a meaningless exercise in philosophical nonsense, and you’re quite sure that understanding is perfectly self-explanatory, then let this article serve to challenge that notion, and provide a new perspective.

We’re going to tackle the meaning of understanding, and how it ties in with our neurological framework of conceptual knowledge: the networks of neurons in your brain that are somehow capable of acquiring and storing information. And as it turns out, this knowledge of knowledge itself provides clues as to how we can further enhance it – a kind of epistemological bootstrapping!

But enough of this abstract waffling – let’s try to make these ideas concrete:

Consider the people who existed in a time before Isaac Newton. If they wanted to know why an unsupported object (such as an apple) falls directly to the ground, how could they, without recourse to an explanation involving gravity?

Is it the case then, that nobody actually understood why an apple would fall to the ground when released from a tree?

Yet if you were to ask a random 16th century passerby on the streets of London if he understands why apples drop to the earth when unsupported, he would likely tell you that he does – probably with a condescending and contemptuous reply:

“Apples drop to the earth because all objects drop to the earth when unsupported – this is trivially obvious. Now away with you, benighted peasant.”

And with this somewhat aggressive fictional man’s explanation, we have the first indication of what understanding might entail:

It is the ability to answer a question in terms of something that is already assumed to be true.

The man already possessed, within his conceptual framework, knowledge of a ‘law’ or axiom – a fundamental truth – that “all unsupported objects fall to the earth” – which he used to ‘understand’, and answer, the question of why an apple drops to the ground.

In other words, he used a general principle, assumed to be true (all unsupported objects fall to the earth), to explain a specific instance (an unsupported apple falls to the earth), so that the specific instance could be understood in terms of the general principle.

(If all of this still seems hopelessly abstract, don’t worry, it should soon become clear).

So, are we prepared to grant that the pre-Newtonian crowd actually understood why apples fell to the ground – as our belligerent London man would undoubtedly attest?

Well, consider asking the man: “why is it then, that all unsupported objects fall to the earth?”

You would either be scornfully dismissed as a buffoon or a lunatic, or perhaps offered some platitude about it being “God’s design”.

But in asking the question, you have struck the bedrock of his understanding; exposed his foundational assumptions about falling apples – assumptions which are themselves not understood in terms of anything else.

Thus the depth of his understanding is sorely limited.

Clearly then, there is no such thing as absolute understanding – but rather, ‘levels’ or ‘degrees’ of understanding.

Enter Newton

With Newton’s theory of gravity, we were in a position to understand the general principle – that “all unsupported objects fall to the earth” – in terms of his theory of gravitational forces: that “any two masses will each exert an attractive force on the other”.

Thus we were able to understand a general principle – previously a fundamental axiom in and of itself – in terms of an even more general principle; a new fundamental axiom.

The enlightened post-Newton crowd possessed a deeper understanding than their pre-Newton counterparts – they were one rung deeper in a hierarchy of understanding.

The figure below illustrates a rough overview of this hierarchy:

This diagram is simply meant to illustrate the process of questioning our fundamental assumptions about the universe – asking why it is that things should be a certain way – so that we can uncover ever deeper axioms of the universe.

One feature of any sufficiently ‘deep’, or fundamental axiom, is that it permits us to explain a host of different phenomena.

For example, even the law that “all unsupported objects fall to the ground” has considerable explanatory power. Not only does it tell us that an unsupported apple will fall to the ground, but it also makes predictions about unsupported bricks, socks, and toasters; namely, that each will fall to the ground.

So, in general, a deeper explanation should be able to provide us with an understanding of more phenomena.

Going a level deeper then, to Newton’s idea of the gravitational force, we can use it to understand not only why objects (such as apples and toasters) fall to the earth when unsupported, but we can also account for phenomena previously thought to be unrelated, like the planetary orbits around the sun, or the moon’s motion around the earth.

Each of these previously disparate phenomena can be understood in terms of precisely the same fundamental assumptions.

In a nutshell, this is the quest of physics: to find the fundamental assumptions of our universe from which all else can ultimately be explained!

Now, it is also important to notice that each of our new axioms about gravity invokes new concepts – like ‘masses’, and ‘space-time’ – which themselves need to be understood in terms of something else.

In fact, for each of these other concepts, we could construct other tiered chains of explanation – with ever deeper understandings made possible using ever more fundamental axioms.

The gestalt picture of each of these branches of knowledge forms what we might call an epistemological tree (a tree of knowledge):

‘Branches’ that sit higher on the tree represent knowledge that is less ‘fundamental’ than that on the branches below them – which is to say, higher branches are ‘emergent properties’ of the lower ones (more about emergent properties in a future article).

In general, if a particular branch rests upon another, then in order to ‘understand’ the concepts on that branch, you must first possess knowledge of the lower branch.

Yet it is perfectly possible to work within any given branch, say, the field of psychology, without first acquiring a comprehensive knowledge of physics and mathematics. You would simply need to learn the ‘axioms’ of psychology, and work within them.

But for a true, deep ‘understanding’, and indeed for the means to excel in the field, it would be requisite to appreciate the underlying biological, neurological and computational mechanisms that give rise to the psychological phenomena in the first place.

If illustrated with any satisfying degree of completeness (as is not the case with our simplified picture above), the tree would be an unfathomably complex sprawling tangle of interdependent, recursively-defined concepts.

Such a fine-grained network of concepts would compose the entirety of our scientific framework of knowledge.

And hopefully you can see that, while there are undoubtedly hierarchical aspects of knowledge and understanding, the structure is primarily and unavoidably a recursive network of concepts, where each concept is defined relatively – in terms of other concepts, which are themselves defined by yet others.

But to speak of the hierarchical flavour of the structure, it is worth noting that we could pick any place on the epistemological tree – say, the concept of DNA – and, following the chain of explanation downwards, we would invariably be funneled to the same fundamental axioms – the wellspring of physical laws from which all else can emerge.

Within the last several decades, physicists have begun to discover that at the heart of their own discipline rests not physical substance, like particles, but abstract mathematical relationships. At a certain depth, any proposed theoretical model that relies on more than these conceptually barren mathematical relationships is quickly shown to be wrong.

Thus, the core of our universe, whether we like it or not – seems to be raw mathematics – devoid of colour or texture – but replete with possibility (and probability, but that’s a story for another time).

But we’re digressing beyond the scope of the article here, so back to knowledge:

The neurobiology of learning

Interestingly, this visual diagram of branching, interconnected concepts closely mirrors the actual, physical structure of learned concepts within the brain. What does that mean?

Well, let’s take a look at some basic neurobiology, and see how high-level conceptual models might be constructed from the relatively innocuous building blocks of single neurons.

Remember, a neuron is the fundamental component of our nervous system – capable of receiving a signal, and then passing that signal along – usually to other neurons.

Of course, there are some specialized neurons that can receive signals not from other neurons, but from the external world – signals like photons of light, or sound waves. And some can send signals not just to other neurons, but, for example, to muscle fibres – causing them to contract, which facilitates motor movement.

In general, a neuron receives an electrical signal via one of its ‘feelers’ – the dendrites – and then transmits that signal down towards its many ‘feet’ – the axon terminals.

The dendrites (feelers) of one neuron can be connected to the feet of a number of different cells – so that a network of neurons can form this highly interconnected structure.

Skipping over much of the detail, let me make the following claim:

A memory, or a concept, is stored in the connections between neurons (or networks of neurons).

Suppose a particular neuron, neuron ‘x’, fires. Further suppose that the neuron has previously established fixed connections to neurons ‘y’ and ‘z’, such that every time neuron x fires, so too will neurons y and z. This is the mechanism by which your brain stores memories, and thus the entirety of your conceptual knowledge; the information is stored in the relative connections of neurons.
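
To see the claim in miniature, here is a toy sketch in Python, using the hypothetical neurons x, y and z from above. This is an illustration of the wiring idea only, not a biological simulation – real neurons integrate many graded inputs:

```python
# Toy model: a 'network' is just a map from each neuron to the neurons
# it connects to. The stored information lives in these connections.
connections = {
    "x": ["y", "z"],   # x is wired to y and z
    "y": [],
    "z": [],
}

def fire(neuron, fired=None):
    """Fire a neuron and propagate the signal to every downstream neuron."""
    if fired is None:
        fired = set()
    if neuron in fired:            # guard against loops in cyclic networks
        return fired
    fired.add(neuron)
    for downstream in connections[neuron]:
        fire(downstream, fired)
    return fired

print(sorted(fire("x")))   # ['x', 'y', 'z'] -- firing x always drags y and z along
```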

So with this rather impoverished explanation in place, let’s look at how the brain might store the concept of a house.

Suppose the following grid represents all of your photoreceptor neurons (neurons in your retina which detect light, and thus form the basis of our vision) – arranged such that they represent the incoming image of the outside world.

Now, let’s say that if a particular square (or pixel) of vision is highlighted, this means that the corresponding neuron has detected an incoming photon, and so is turned ‘on’, and fires a signal. So the above image shows that a squiggly ‘S’ shaped object exists in the external world, which is ‘seen’ by the grid of photoreceptors in your eye – causing them to fire in that pattern.

Let’s look at some more examples.

The images below represent three rather simple visual images – in each case, a simple line, but in different orientations.

But now we’re going to look at some neurons sitting deeper in the brain, and how they might be connected to the photoreceptor neurons. Here is a representation of the first image, the top horizontal line, which shows its connection to neurons deeper in the brain (the red dots represent neurons):

Neuron A will fire if and only if the six photoreceptors that form that horizontal line fire simultaneously. This means that neuron A will only fire whenever there is an actual, real horizontal line out there in the world – and thus, in a sense, can be said to be storing the concept of this horizontal line.

The same goes for neurons B and C, which store the concepts of a diagonal line and a lower horizontal line, respectively, as shown below:

Now what does it mean when neurons A, B and C are all triggered simultaneously?

Well, the 16 photoreceptor neurons that feed into these three neurons must all be activated, like this:

Now, the outputs of neurons A, B and C are connected to neuron D, which will fire if and only if neurons A, B and C are firing. Thus the neuron D can be said to represent the actual visual image of “Z” out there in the real world!

This is the actual mechanism by which visual concepts can be embodied in our brains!

We start with the individual receptor neurons, which each represent an individual ‘pixel’ in the outside world, and then build up in complexity, by having single neurons which can embody the ‘chunked’ sum total of a bunch of these base neurons.
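
For the programmatically inclined, here is a minimal sketch of this chunking in Python. The 6×6 grid and the pixel coordinates are illustrative assumptions, chosen so that the three lines share their corners and total the 16 photoreceptors described above; the neuron names A–D follow the discussion:

```python
from collections import defaultdict

def detector(pixels):
    """A 'neuron' that fires iff every photoreceptor in `pixels` fires."""
    return lambda image: all(image[p] for p in pixels)

TOP      = {(0, c) for c in range(6)}      # top horizontal line
DIAGONAL = {(r, 5 - r) for r in range(6)}  # diagonal from top-right to bottom-left
BOTTOM   = {(5, c) for c in range(6)}      # lower horizontal line

neuron_a = detector(TOP)
neuron_b = detector(DIAGONAL)
neuron_c = detector(BOTTOM)

def neuron_d(image):
    """Fires iff A, B and C all fire: the 'Z' concept neuron."""
    return neuron_a(image) and neuron_b(image) and neuron_c(image)

z_image = defaultdict(bool)                # image: pixel -> lit (True) or not
for pixel in TOP | DIAGONAL | BOTTOM:      # 16 distinct pixels (corners shared)
    z_image[pixel] = True

print(neuron_d(z_image))   # True  -- a complete 'Z' exists in the visual field
z_image[(5, 0)] = False    # switch off one corner photoreceptor
print(neuron_d(z_image))   # False -- the 'Z' neuron falls silent
```

Note that nothing about neuron D itself is ‘Z-shaped’ – its meaning comes entirely from what it is wired to.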

It is not difficult to imagine how, in this manner, we can build up to such complexity that we have a single neuron that fires if and only if there is a house in the real world – which we might call the ‘house neuron’.

Now, while this theoretically implies that our minds do indeed possess single neurons that embody complex concepts, experimental evidence seems to show instead that a concept might be ‘distributed’ among not one, but many different neurons – perhaps as a means of providing redundancy – but this detail doesn’t undermine the general principle.

We should also note that there is nothing unique about the way we encode visual concepts; precisely the same mechanism is used in the brain to store auditory, tactile, and other sensory concepts.

So, we now have some understanding of how our brain can model real things in the outside world. But we have only considered how neurons that embody concepts/images are instantiated by their input (sensory) signals, whereas the real interest comes from their output connections.

Conceptual structure in the brain

We have seen how the visual image of, for example, water, might be represented by neurons in the visual cortex, and we can extrapolate this reasoning to see how the sound of water might also be represented by neurons in the auditory cortex.

So the next thing we should consider is how the visual concept of water and the auditory concept of water are themselves connected.

Let’s postulate a network of neurons that collectively embody the concept of water itself.

This network forms connections between the visual image of water (connections to the neurons in the visual cortex that represent water), the sound of water (connections to neurons in the auditory cortex that represent water), and an almost incomprehensible number of other concepts relating to water.

You see, the network of neurons that we have labeled as representing water attains this distinction solely because of the specific connections it makes to other areas of the brain – like the particular visual and auditory regions.

But, perhaps more interestingly, the ‘water’ neurons are also connected to other networks of neurons which themselves embody a specific concept – like the concept of a ‘boat’, or of ‘blueness’, or ‘beaches’, or ‘seagulls’.

Thus whenever the ‘water’ neurons are active, they will also activate the ‘boat’, ‘blueness’ and ‘beach’ neurons. This is why it is so easy for us to make these associative connections: if someone says “think of water – now what else are you thinking of?” you might name some of these related concepts – because they are physically connected, via actual neurons, in the brain. So when you are told to think of water, your water neurons light up, and cause the boat and beach neurons to light up also.
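
Here is the same idea as a toy sketch. The particular concept graph is, of course, an assumption invented for this example:

```python
# Toy spreading activation: concepts are nodes, learned associations are edges.
associations = {
    "water":    ["boat", "blueness", "beach"],
    "beach":    ["seagull", "sand"],
    "boat":     [],
    "blueness": [],
    "seagull":  [],
    "sand":     [],
}

def think_of(concept, depth=1):
    """Activate a concept plus its associates, out to a limited depth."""
    active = {concept}
    frontier = [concept]
    for _ in range(depth):
        frontier = [assoc for c in frontier for assoc in associations[c]]
        active.update(frontier)
    return active

print(sorted(think_of("water")))           # water lights up boat, blueness, beach
print(sorted(think_of("water", depth=2)))  # ...which light up their own associates
```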

This network of conceptual connections is what defines water as water. There is no absolute meaning of ‘water’ within your brain – but rather, a relative meaning that arises due to the relative connections of a vast array of other concepts – each of which are themselves defined relative to yet other concepts, and so on.

Better Learning

So then, with these ideas under our belt, we are in a place to see how we might facilitate better learning.

When we investigate our own understanding of something – gravity, for example – we are exploring the conceptual landscape of gravity – the related concepts, and how, exactly, they are related.

Now, if your understanding of gravity is very detailed – replete with concepts of ‘masses’ and ‘forces’ and ‘space-time’ and ‘planetary motions’ – then all of these related concepts will be physically connected to each other in your brain, and so you can navigate this conceptual space with ease.

This is the type of understanding you want: the rich interconnection of concepts, such that the physical structure of your brain closely resembles the vast, complex structure of our epistemological tree of knowledge, which we saw earlier.

In fact, there will be a direct mapping between the conceptual structures that we could illustrate on paper, and the conceptual structures of neurons that lie within our brain.

On the other hand, here’s how you don’t want to learn:

Rote learning

Rote learning a topic involves the formation of isolated ‘islands’ of neurons. In this case, you have memorized concepts that are more or less defined in terms of themselves – and so they aren’t nested within this broader conceptual structure in your brain.

Thus, whenever you access some rote-learned concept, you are unable to form connections between it and other concepts in your brain. It is neurologically isolated; it has not established physical connections with other complex networks of knowledge in your brain, and so your understanding is shallow. What’s more, since there are so few connections between a rote-learned topic and other areas of your brain, it is likely to fall into disuse. When this happens, its connections will begin to atrophy, deteriorating until, eventually, the concept is ‘forgotten’ entirely.

So, we can see that if you wish to enhance your understanding of the world, and better memorize new concepts, you must not only furnish your mind with new conceptual structures, but also facilitate connections between these new structures and existing ones.

To do this, you must understand new concepts in terms of previously understood concepts; form the most detailed picture of a new concept in your mind, such that you establish a plethora of connections between it and other things you already know.

Incidentally, this alludes to another fact about learning:

The more you know, the easier it is to learn

The more intricate and detailed your conceptual models are, the easier it is to incorporate new knowledge.

An unlearned person will be possessed of a relatively sparse conceptual framework, so when he tries to understand new concepts, there aren’t a whole lot of connections he can establish between them and existing areas of his brain, and so it will be difficult to learn – almost as though he is rote learning any new topic.

Conversely, if you already possess a comprehensive network of interconnected concepts, it will be much easier to fit a new idea within this structure, because there are so many possible connections with which to accommodate it.

Conclusion

As was illustrated earlier in the article, understanding can be thought of in terms of a hierarchy of axioms – such that a concept is understood in terms of more fundamental axioms.

But we then saw that an understanding of a concept also relies on connections between this concept and others – whether the other concepts are lower in the ‘hierarchy’ or not. That is, concepts are defined (neurobiologically, and conceptually) in terms of other concepts, each of which is itself defined by other concepts, and so on.

Knowledge is thus a rich, recursive tapestry of interwoven ideas.

Here at Rational Primate, we want to expand and refine our own network of concepts; facilitating new connections, and incorporating new ideas; sculpting and colouring the structure of our knowledge; connecting the various conceptual dots so that we can better understand the world.

And if you’d like to cultivate your own conceptual structure, you should definitely stick around!

7 comments

  1. I really enjoyed that. Thank you for posting it for us to read. Without sounding too skeptical, is it possible you may be able to supply references for some of the concepts above? Not just to support your position, but also as an aid to further understanding. I’m guessing that new information would be similar to rote learning in that it may not have previous knowledge to link to? This is new information for me and I have quite a bit of reading ahead of me to build a few new neural connections.


    1. Thanks for your comment – I’m so glad you enjoyed the article!
      By the way, I encourage skepticism – that’s the lifeblood of truth-seeking.

      Now, I covered quite a few areas so sourcing will be difficult, but I’ll try to go through the main ideas one by one.

      1) Understanding in terms of underlying axioms.

      I can’t think of any good sources for this. I would simply say that you need to convince yourself of whether or not it makes for a good illustration of what’s going on when we understand things. But certainly we get an idea of what’s happening just by taking a step back and observing the gradual progress of our scientific understanding through experiment. We have been uncovering deeper, and more generalizable ‘axioms’ – for example, the discovery of ever more fundamental particles: from atoms, to protons (and neutrons and electrons), to quarks, and now maybe to strings. We can use our model of the deepest theory to describe everything ‘above’ it – so that, for instance, we can describe quarks, protons, atoms, molecules, and therefore all matter, in terms of strings.

      2) The idea of neurons representing more and more complex ‘chunkings’ of visual information – so that a neuron can represent the visual image of ‘Z’ (as shown in the article).

      I think this idea first came from experiments by a pair of neuroscientists, Hubel and Wiesel, who looked at the visual neurons of cats. I couldn’t find a good source in a brief Google search just then – but my own learning came from a free online course on Coursera (https://www.coursera.org/learn/computational-neuroscience). You’ll find the kind of stuff I was talking about in week 2: “what is the neural code”. I think they also discuss somewhere in there that there are specific neurons that fire only in response to, for example, a picture of Angelina Jolie, and not in response to any other person. So there’s plenty of evidence to support the ‘Grandmother Cell’ hypothesis (https://en.wikipedia.org/wiki/Grandmother_cell)

      3) The idea of connections between different conceptual regions in the brain (eg water, boats, beach etc)

      Again, I don’t have a concrete source I can point to unfortunately. But the idea is more or less an extension of the fact that certain neurons can encode specific concepts. I actually can’t remember if I was taught explicitly about regions of the brain (networks of neurons) that represent, say, water. But to me it seems like it must be the case, and here’s why:

      If you look at a picture of water, you will automatically dredge up a bunch of associations. Now, the same thing happens when you see the word ‘water’ written on a page. Or when you hear the sounds of water.
      Therefore, each of these different neural areas must be ‘funneling’ into, or activating, the same areas of the brain – and must therefore be connected. So we can postulate that the confluence, or junction, of these sensory starting points is the area in the brain that represents the very concept of water. This area is not only connected to these sensory representations of water (the sound, image etc), but to other related ‘conceptual areas’ of the brain – like the area that represents the concept of boats.
      The fact that these different conceptual areas must be physically connected follows from
      a) the science of how neurons work (sending signals to other neurons that are directly connected via dendrites to axon terminals) and
      b) the fact that thinking of water will inevitably cause you to think of related concepts – boats, as opposed to couches (unless you have some kind of special memory involving water and couches).

      This is also why, when you get lost in thought, or even just in the course of a conversation, you usually follow a stream of consciousness that jumps from topic to topic, where the latest topic is in some way associated with the previous.

      Now, I’m not suggesting that this is a comprehensive explanation for how our brains work; that they’re always following this kind of constrained track of connected concepts – there are many cognitive phenomena not accounted for by this picture – but I would say that it’s definitely a piece of the puzzle.

      Now, just to make the idea perfectly clear, consider your computer. There will be a bunch of transistors in there which collectively store the information for a particular Word document you have saved. That is, if you were to flip the bits of these specific transistors a certain way, you could erase the Word document, but leave the function of the computer otherwise unaffected.
      Thus, in a very real sense, this specific region of transistors in your computer is storing the concept of that Word document. This is analogous to the idea that specific neurons in the brain can represent a concept.

      But now, here’s something interesting.

      Suppose you identify this region of transistors in your hard drive, and physically remove them. Now, when you hold them up to the light, do they still represent that word document?
      Clearly not – they’re just a bunch of now meaningless switches.

      So the ‘meaning’ comes from their relative position within the overall structure of the computer; it comes from the way that they affect, and are affected by the operation of the computer.

      Same with the neurons in your brain that represent water. They only attain that distinction by virtue of their place within the overall system of your brain.

      Another interesting and related point:

      Consider the abstract concept of water itself (divorced from any embodiment in neurons or transistors). The concept itself acquires meaning only in the context of every other concept that you understand it in terms of. That is, you could not possibly illustrate the concept of water without also implicitly invoking the concepts of ‘blueness’, and ‘earth’, and ‘objects’ and ‘solids’ and ‘liquids’. Think about it – what is the most parsimonious, least detailed conceptual framework you could construct, that you could use to describe ‘water’?

      This illustrates the idea that there are really no ‘free floating’ concepts – ie, a concept that can exist by itself. Same thing with something abstract, and seemingly simple, like a number.
      The number ‘4’ cannot exist by itself – it requires the whole structure of number theory to come along for the ride!

      I guess the point is this: whether we’re talking about concepts within the structure of the brain or a computer, or within the abstract structure of concepts themselves – they only acquire meaning via their place within a larger context, or framework.

      This has many implications – one of which relates to rote learning vs deep learning.

      To your question about whether new information is like rote learning:
      it depends on the nature of this information. Obviously, it can’t be completely ‘new’ – it must necessarily be connected to a larger conceptual framework.

      For example, say I want to teach you a new word: “teleology”. Here’s the definition: “the explanation of phenomena by the purpose they serve rather than by postulated causes.”

      Now (assuming you don’t already know the word, though you very well might), this is new information – but not completely new, in the sense that you already know all of the concepts that collectively define ‘teleology’ (explanation, purpose, cause etc).

      This new concept is like a ‘chunking’ of a bunch of old concepts, which have been arranged in a particular way.

      Now, rote learning this information might consist of you trying to memorize the definition. The worst way to do this would be to simply memorize the words themselves. We could still define an area of your brain as representing the concept of ‘teleology’ – but if you have merely memorized the words, this area will be pretty barren. There will be a connection between the visual area of your brain that represents the words of the definition themselves, and the motor areas of your brain which allow you to write the words down, or speak them aloud, etc.

      But, unless you really try to understand the concept of teleology, there will be no strong connections between the ‘teleology’ conceptual area and these other rich conceptual areas of your brain – the areas that represent ‘explanation’, ‘cause’, ‘purpose’, and so on.

      If you can integrate the concept with all of these other concepts, you will have firmly nested it within the entirety of your conceptual framework about the world.
      If you have rote learned the definition of teleology, then it will sit apart from everything else, occupying its own little island of neurons, without much connection to anything.

      So then, in light of this, how would you optimize your learning of the concept?
      Answer: form as many connections as possible:
      Swirl the idea around in your mind; apply it to concrete examples: call up the visual image of, say, the brightly coloured feathers of a bird, and understand that they are best explained in terms of their teleology – that their purpose is to attract a mate – rather than trying to understand them in terms of the deterministic processes of biochemistry and biology that bring them about.

      Now you have a distinct visual image for the concept of teleology, of a brightly coloured bird.

      But go even further; apply the concept to even more examples in your mind.

      Remember how we said a concept is defined in terms of the overall conceptual context in which it sits? Well now you’ve further defined the concept of teleology in terms of the feathers of birds, and a bunch of other stuff.

      Hopefully it is plain to see how useful this is. The next time someone asks you about teleology, you will have a lot to say. Or when someone asks you why birds have brightly coloured feathers, you can bring up the concept of teleology.
      This applies to the entirety of your learned knowledge – the more concepts you have stored in there, and the more connections you have between concepts, the more you can understand.

      There is no limit to the conceptual interconnectedness you can nurture in your brain. So keep learning, and learn well 🙂
