
How Might Resonant Memory Work?

So how might resonant memory work? Here’s a highly simplified diagram:

[Figure: two neurons connected in a loop]

There are two neurons here, creating a loop. How likely is this to happen with millions of nerve cells spreading dendrites all over the cortex?

To the extent that it does happen, it could set up a loop with a specific frequency:

[Figure: the two-neuron loop and its cycle frequency]

The frequency (f) of this loop is approximately equal to the nerve impulse velocity (v) divided by the length of the loop (L). This doesn’t account for the time added by each synapse, but it’s a rough start. Whatever the velocity of each axon and the lag of each synapse, this particular little loop has a characteristic cycle time. If you hit it with an input whose pulses are spaced at that cycle time, the loop will resonate:

[Figure: a train of input spikes spaced at the loop’s cycle time]

The input stream has pulses separated by t, the cycle time of the loop. The first pulse to hit will circle the loop and feed back into the starting neuron, just as the second pulse is coming in. Because the body of the neuron sums the inputs, it will keep sending a strong signal through the loop. The synapses are strengthened when a signal continues to impact them, so this circuit starts to provide stronger and stronger resonant signals as time goes on. We are training this loop to respond to the input.

The output is just the same as the input here, so we haven’t really accomplished much. But we have seen just how easy it is to create a hypothetical neural circuit that can resonate with a given input.
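
For readers who like to tinker, here is a minimal numerical sketch of the loop above. It is my own toy model, not a biophysical simulation: the conduction velocity, loop length and one-trip gain are made-up illustrative values, and a simple delayed-feedback sum stands in for the neurons. Only the pulse train that matches the loop’s own cycle time builds up a strong response:

    import numpy as np

    # Illustrative values only; a real loop's velocity and length are unknown here.
    v = 10.0            # conduction velocity along the loop, m/s (hypothetical)
    L = 0.05            # total length of the loop, m (hypothetical)
    cycle = L / v       # one trip around the loop: 5 ms
    freq = 1.0 / cycle  # resonant frequency f = v / L: 200 Hz

    dt = 1e-4                        # simulation step, s
    steps = int(0.25 / dt)           # simulate 250 ms
    delay = int(round(cycle / dt))   # loop delay expressed in steps
    gain = 0.8                       # fraction of the signal surviving one trip around the loop

    def peak_activity(pulse_interval):
        """Drive the loop with pulses spaced pulse_interval seconds apart
        and return the largest summed signal seen at the first neuron."""
        period = int(round(pulse_interval / dt))
        a = np.zeros(steps)
        for i in range(steps):
            echo = gain * a[i - delay] if i >= delay else 0.0  # signal returning around the loop
            pulse = 1.0 if i % period == 0 else 0.0            # external input spike
            a[i] = echo + pulse                                # the cell body sums its inputs
        return a.max()

    print("pulses at the loop's own cycle time:", peak_activity(cycle))       # builds up toward ~5
    print("pulses at an unrelated interval:    ", peak_activity(1.7 * cycle)) # stays near 1

In this toy model the “training” is nothing more than constructive interference between the incoming pulses and the signal echoing around the loop; real synaptic strengthening would make the effect persist after the input stops.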

 

The Brain is a Metaphor Machine

There is an impressive amount of metaphor in our thought processes. To take a simple example, we tend to store sphere-like things in proximity to one another: we can see a big wheel of gouda when we look at the moon, or compare the sun to a red rubber ball. If memory is stored by circuits that resonate with certain aspects of the input data, metaphor might be a natural consequence of the storage scheme. Metaphor, in other words, may help us to visualize our own inner workings.

Our very language promotes the resonance theory. This is hardly good evidence, but it is suggestive. When something makes sense to us, we say it resonates. When we remember something, we say it rings a bell. When we communicate well, we say we’re on the same wavelength. Perhaps, on some unconscious level, we recognize our own mechanisms of thought?

 

How Can You Avoid Resonance?

It is hard to understand how you could suppress resonant circuits in the brain. So many cells connect to each other and loop back again. As previously mentioned, several types of neurons oscillate all on their own. Pumping a signal into this mass of loops would seem to create resonance all over the place. Are we to believe that a frugal Mother Nature wouldn’t take advantage of it?

The Need for Speed

Humans have to make snap judgments that still humble the best computers in existence today. The best way to accomplish this might be to exploit the inherent immediacy of resonant circuits: a problem can be recognized, and associated thoughts retrieved, with little or no computation. When recognizing a lion and settling on an escape route, speed matters more than precise computation.

As we’ve seen with the example of the tuning forks, resonance is pretty much immediate. As soon as a fork (or a neural circuit) gets the input, it starts to resonate, or ring. It is a driven harmonic oscillator, and all the circuits tuned to the input frequency will ring simultaneously, while the circuits that aren’t so tuned will stay quiet.

The speed with which animals can recognize and act on input is one of the greatest challenges in neural simulations. All of our attempts to throw computing power at the problem are still far too slow to achieve the results of even the lowliest fly. The beauty of resonant memory is that computation is not even required: a circuit can start to resonate within milliseconds, and any correlated information can be retrieved in the same time frame.

Any theory of cognition or memory must take this speed into account, especially given that even the fastest neurons conduct signals at only around 100 m/s (roughly 400 km/h), far slower than electrical signals in a circuit. Whatever is happening in our squishy brains is faster than the current crop of AI algorithms can handle.

A related speed problem is the converse of recognition: knowing that you don’t recognize something. If I ask you what you know about “tachyon snails”, you will quickly say that you’ve never heard of them. The feeling that you don’t know something is one of the fastest and easiest chores for the mind to accomplish. In contrast, it is one of the hardest things for a computer to do: Google has to exhaustively search its entire database to find out that it doesn’t know about “tachyon snails” (okay, that’s a weird search, but it’s hard to come up with something that Google doesn’t have on tap). But you and I know instantly that we have no knowledge of such a creature.

Because resonance is immediate, if you don’t get a signal you can be assured that the information doesn’t exist. Simple and fast. Whatever humans do, it is not like a typical computer search. Resonant memory fits this observation well.
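
As a loose software analogy (mine, not the article’s, and not a claim about neurons), the difference is like scanning a list versus probing a content-keyed set: the scan has to touch every entry before it can report a miss, while the keyed lookup answers “never heard of it” in a single step:

    # A loose software analogy, not a brain model: reporting a miss by exhaustive
    # scanning versus by a direct, content-keyed probe.
    known_things = ["lion", "gouda", "tuning fork", "trumpet", "red rubber ball"]

    def knows_by_scanning(query):
        # Must examine every entry before it can say "never heard of it".
        for thing in known_things:
            if thing == query:
                return True
        return False

    known_set = set(known_things)   # content-addressable: the item itself is the key

    def knows_by_lookup(query):
        # A single hashed probe answers yes or no, whether or not the item exists.
        return query in known_set

    print(knows_by_scanning("tachyon snails"))  # False, after touching every entry
    print(knows_by_lookup("tachyon snails"))    # False, in one step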

Holographic Storage of Memory

Human memory has holographic properties, which can be pretty weird. When you cut an ordinary photograph into quadrants, you get four separate pieces of the photo. When you cut a hologram into quadrants, each piece contains the entire image, just smaller and with less resolution.

Cutting a hologram is weird.


That’s because the hologram is an interference pattern that is spread out over the whole image. To create one, you aim a laser beam at a half-silvered mirror, which splits it into two beams. One beam illuminates the object and the other goes straight to the film. The light that reflects off the object hits the film along with the reference beam, and the two interfere, leaving a fine interference pattern across the film. Each small patch of that pattern carries information about the entire image, albeit at low resolution. So you can chop it finer and finer and still see the whole image in each tiny hologram.

Similarly, our memories are spread out over the cerebral cortex, and damage to parts of it doesn’t remove specific features from memory. Studies by Karl Lashley, published in 1950, showed that removing patches of a rat’s cortex from different areas did not erase the rat’s memory of a maze. Wherever the rat was storing that memory, it didn’t seem to be concentrated in any one area. Rather, the information seemed to be spread over the whole cortex. In the brain, redundancy seems to be built in.

So, it seems, memory shares some of the strange features of a hologram, but that doesn’t prove that memory is actually a hologram. There may be other ways to distribute information, and resonant memory is one of them: recruiting neural loops with resonance could easily involve the actions of dozens or hundreds of loops with the requisite frequency.

However, it is intriguing that holographic interference patterns can be computed with Fourier transforms, providing a close mathematical link to ideas of resonant memory. The two may in fact be connected at a deeper level, even if the mechanics end up being different.
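
One way to see the connection numerically: treat the 2-D Fourier transform of an image as a stand-in for a hologram. This is a sketch of the analogy only, nothing here is meant as a brain mechanism, but cutting away most of the “hologram” still returns the whole image at lower resolution, whereas cutting the photo returns only a quadrant:

    import numpy as np

    # A small test "scene": a bright square on a dark background.
    n = 128
    scene = np.zeros((n, n))
    scene[40:88, 40:88] = 1.0
    footprint = scene > 0                     # where the square lives

    # Cutting the photo: keep only the top-left quadrant of the image itself.
    photo_piece = scene.copy()
    photo_piece[n//2:, :] = 0
    photo_piece[:, n//2:] = 0

    # Cutting the "hologram": keep only the central quarter of the Fourier plane.
    hologram = np.fft.fftshift(np.fft.fft2(scene))
    piece = np.zeros_like(hologram)
    piece[n//4:3*n//4, n//4:3*n//4] = hologram[n//4:3*n//4, n//4:3*n//4]
    recovered = np.abs(np.fft.ifft2(np.fft.ifftshift(piece)))

    # How much of the square survives in each case?
    print("photo quadrant   :", (photo_piece[footprint] > 0.1).mean())  # ~0.25: three quarters gone
    print("hologram quadrant:", (recovered[footprint] > 0.1).mean())    # ~1.0: all of it, just blurred

The cut costs resolution rather than coverage, which parallels what Lashley saw when he cut cortex instead of film.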

Several researchers were puzzled and excited by Lashley’s results. In the 1960s, the neurosurgeon and psychiatrist Karl Pribram was influenced by physicist David Bohm to apply principles of holography to the brain as a computing mechanism. The theory is enticing, but the details remain a little fuzzy. Pribram called his theory holonomics, and it is one of the inspirations for the theory of resonant memory. On some level, both theories may be manifestations of the same underlying mechanism.

Pribram’s theories have been co-opted by new-age spiritualists, who think it may bridge the gap between science and spiritualism. This may have led many serious researchers to discount holonomics. However, the same new-agers have found ways to wring spiritualism out of quantum mechanics as well, and that hasn’t affected its utility in physics. Ultimately, holonomics must be judged by its accuracy, not its acolytes.

We’re not quite done with the strangeness of holography. As we just discussed, a normal hologram is exposed by interfering the light of a plain laser with light that has scattered off your object. But there is a holographic trick: instead of the plain laser beam, you can use light scattered from a second object as the reference. If you make a hologram of an apple interfering with an orange, you will subsequently see the apple when you look through the hologram at an orange, and vice versa. You now have a level of interconnection built into the stored data: correlating separate memories like this is simply built into the mathematics of holography.
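
You can play with that associative trick in code. The sketch below is my own illustration and borrows Tony Plate’s “holographic reduced representations” (circular convolution standing in for the interference step) rather than anything physical: bind two random high-dimensional vectors together, and presenting either one pulls a recognizable copy of the other back out.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2048

    def random_item():
        # A high-dimensional random vector stands in for a stored pattern.
        return rng.normal(0.0, 1.0 / np.sqrt(n), n)

    def bind(a, b):
        # Circular convolution: the "interference pattern" that stores the pair.
        return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n)

    def probe(trace, cue):
        # Circular correlation with the cue pulls its partner back out, noisily.
        return np.fft.irfft(np.fft.rfft(trace) * np.conj(np.fft.rfft(cue)), n)

    def cosine(x, y):
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

    apple, orange, lion, tree = (random_item() for _ in range(4))
    trace = bind(apple, orange)          # one stored "hologram" of the pair

    recalled = probe(trace, orange)      # look at the trace "through" the orange
    for name, item in [("apple", apple), ("lion", lion), ("tree", tree)]:
        print(name, round(cosine(recalled, item), 2))
    # apple scores far above the unrelated items; probing with apple returns orange by symmetry

Two items stored as a single trace, each retrievable from the other, is the software shadow of the apple-and-orange hologram.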

Such cross-referencing might mean that the roar of a lion gets tied to the urge to climb a tree. By linking two ideas together like this, you might guarantee an immediate response that could save your life. Unfortunately, this is not a built-in feature of resonant memory theory. Instead, the same kind of cross-wiring probably comes from growing nerve cells in the hippocampus that physically stitch memories together. We’ll return to this problem later.

References

Lashley, K. S. “In Search of the Engram.” Physiological Mechanisms in Animal Behaviour. Academic Press, New York, 1950.

Pribram, Karl H. “Holonomy and Structure in the Organization of Perception.” Images, Perception, and Knowledge. Springer Netherlands, 1977. 155-185.

Why Suspect Resonance?

Individual neurons can resonate, groups of neurons can resonate and the entire brain can resonate. This has been known for years, but the idea of resonance as memory storage seems not to have been widely discussed. If you have some citations, I would love to see them!

[Figure: a group of interconnected neurons]

See any resonant loops here?

Artificial neural networks provide us with wonderful simulations of how actual nerves might work in a connected group, but neural networks are still captives of silicon and use ordinary RAM to store their results.

Neural network theory might seem like a rival to resonant memory, but it is designed more for calculating outputs from inputs; how the memory itself is handled is left unstated. The theory discussed here is about how resonance might underpin memory storage and retrieval. That said, it may carry more computing power than we traditionally associate with simple memory systems. For instance, it makes sense to connect memories as they are laid down, so that retrieving one memory automatically fetches its associates. That adds a layer of computation to simple data retrieval.

Why even consider resonance for memory systems? There are several compelling reasons. I’ll list them here and then discuss them in greater detail in subsequent articles.

  • Human memory has holographic properties, which may be related to resonance.
  • To avoid becoming some creature’s lunch, thinking requires speed.
  • A realistic memory model needs to account for how quickly we know we don’t know.
  • It is hard to understand how you could suppress resonant circuits in the brain.
  • Our thoughts are highly metaphorical, which relates well to resonance.

These are a few of the rough directional musings that stimulate thinking about resonance in animal memory. They are suggestive only, but a good theory should be able to illuminate these issues and hopefully not contradict them.

 

Can Neurons have Resonance?

It’s fun and easy to make a tuning fork resonate, but can we do the same thing with neurons?

There are a couple of tantalizing hints that we can. The inner ear is lined with tiny cilia that are each tuned to a specific frequency. When you hear a simple sine wave, only those cilia that are of the proper length will resonate. That signal is then transmitted to the brain and you can recognize the pitch. If you have perfect pitch, you might even be able to tell me the frequency of the tone.

[Figure: inner-ear cilia from a frog]

Fine as frog’s hair: inner ear cilia from a frog.

Most sounds are not single frequencies, though, and when the sound of a trumpet hits your ear, dozens of cilia will resonate, each one picking up a specific component of the sound. Based on the attack, sustain and decay of each of these components, a trained brain will determine that a trumpet made the sound.

When the inner ear does this, it performs what is called a Fourier decomposition of the sound. Joseph Fourier was a French mathematician and physicist (1768–1830) who showed that signals can be decomposed into a series of sine waves of varying frequency and amplitude. The richness of a trumpet blast comes from the unique set of frequencies it produces, determined by the shape of the trumpet.

Modern computers perform Fourier analysis all the time, but it is a time-consuming procedure. The magic of the inner ear is that the computation is done immediately, simply via resonance. No calculations required.
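
For comparison, here is the explicit digital version, a small sketch with made-up frequencies rather than real trumpet data: build a tone out of a handful of sine waves, then let an FFT pull the components back out. This is the calculation the cilia sidestep by simply resonating.

    import numpy as np

    fs = 8000                       # sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)   # one second of "sound"

    # A toy "trumpet-like" tone: a fundamental plus a few overtones (made-up amplitudes).
    components = {440: 1.0, 880: 0.5, 1320: 0.3, 1760: 0.2}
    tone = sum(a * np.sin(2 * np.pi * f * t) for f, a in components.items())

    spectrum = np.abs(np.fft.rfft(tone))           # the explicit Fourier decomposition
    freqs = np.fft.rfftfreq(len(tone), 1 / fs)

    # Report the strongest frequencies the analysis finds.
    top = freqs[np.argsort(spectrum)[-4:]]
    print(sorted(int(f) for f in top))             # -> [440, 880, 1320, 1760]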

However enticing, hair cells in the ear are not neurons. Is there anything comparable in the brain? The answer is a resounding yes. Individual nerve cells can oscillate with different frequencies, as can groups of cells. Large ensembles of neurons can give rise to brain-scale oscillations like alpha waves, among others. The brain is simply buzzing with oscillations. Not unlike a room full of tuning forks.

Welcome to Resonant Memory

This site is a repository for information about a theory of memory that relies on resonant circuits.

First, the requisite metaphor:

Imagine a room full of thousands of tuning forks. They are all mounted by their handles, and the frequency of each fork is printed clearly on each mount.

[Figure: a tuning fork mounted on a resonance box]

A tuning fork on a resonance box. From brian0918 at en.wikipedia.

I give you a task: find the 440 Hz tuning fork that is somewhere in the room.

If you think like a computer, you will methodically search the room, looking at the printed frequency on each fork, checking for 440. They aren’t in any order, so your search is exhaustive: you can’t stop until you find the right fork. If there are a bunch of 440 forks and you need to find every one of them, then you will have to examine every single fork to be sure you don’t miss any.

But physicists and musicians know a faster way: Bring your own 440 fork and hit it. It will hum out a nice Concert A and as soon as it does, the other 440 forks in the room will immediately resonate with it. You can walk right over to the resonating fork, no searching required. If you have to search for multiple forks, no problem: they are all resonating sympathetically with each other.

If we consider each fork to be a piece of information or a bit of memory, we might find this to be a useful real-world analog for a new kind of computing. The resonance technique has some benefits:

  • The information can be retrieved instantly.
  • The information is recalled by presenting an example of the information itself: the information is “content-addressable”.
  • If the information doesn’t exist, you will know it instantly, because none of the existing information will resonate.
  • Multiple pieces of information can be found as fast as a single piece of information.

Due to the above benefits, this style of “resonant memory” could be superior to current address-based memory. Furthermore, it may actually provide a plausible neural code for brains.
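
To make the metaphor concrete, here is the tuning-fork room as a few lines of code. This is a software analogy of my own rather than a claim about neural hardware: every stored item is keyed by the frequency it responds to, so striking a frequency retrieves every matching item at once, and a frequency nothing is tuned to comes back empty just as fast.

    from collections import defaultdict

    # The "room": items indexed by the frequency of the fork they are mounted on.
    room = defaultdict(list)
    for freq, memory in [(440, "concert A"), (440, "the oboe's tuning note"),
                         (523, "the C above it"), (659, "the E above that")]:
        room[freq].append(memory)

    def strike(freq):
        """Present a frequency; everything tuned to it 'rings' at once."""
        return room.get(freq, [])

    print(strike(440))   # both 440 Hz items come back together, with no scanning
    print(strike(111))   # nothing is tuned to 111 Hz: an instant "never heard of it"

A hash table is obviously not a brain, but it captures the list of benefits above: content-addressed recall, instant misses, and multiple hits for the price of one.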