![AGI Working Group: Neuroscience-Inspired AI](https://cdn.stayhappening.com/events5/banners/fb3f3e9fbeb3f6d8adb312c2a601372a1d301b8636e36593039738b73f0943bd-rimg-w1200-h675-dc010000-gmir.jpg?v=1719007256)
About this Event
AGI Working Group: Neuroscience-Inspired AI
Current Transformer-based approaches extract too little from the vast information stream: they require enormous training datasets precisely because they make such weak use of the data they see.
The human brain somehow achieves the same functions with a vastly smaller dataset. How does it do that?
Unlike LLMs, the human brain is invested in continually accumulating and refining a model of the world.
The human brain makes predictions by noticing that a newly arriving sequence matches something it has seen before; the prediction is simply whatever came next in that earlier sequence.
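This predict-by-recall idea can be sketched in a few lines of Python. This is a toy illustration, not a claim about neural mechanisms: store everything seen so far, find where the recent input last occurred, and read off what followed it.

```python
# Toy sketch of prediction-by-recall: predict the next item by finding
# where the recent suffix last occurred in the stored history.

def predict_next(history, recent):
    """Return the item that followed `recent` the last time it occurred."""
    n = len(recent)
    # Scan backward so the most recent matching occurrence wins.
    for i in range(len(history) - n, -1, -1):
        if history[i:i + n] == recent and i + n < len(history):
            return history[i + n]
    return None  # the recent sequence is unfamiliar

history = ["A", "B", "C", "D", "A", "B", "C"]
print(predict_next(history, ["B", "C"]))  # → D
```

The lookup is deliberately naive; the point is only that prediction falls out of matching the present against stored sequences, with no parameters to train.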
A happy side effect of this process is learning things.
The edge-detection behavior of retinal neurons leads to a key trait: sparse representations. Neurons in this context fire only when they detect something novel, a change, not more of the same.
When the retina detects an edge, it encodes only the outline, not the fill. The result is a sparse representation.
Let’s consider the process needed to recognize the word “Peach” when the letters “P”, “e”, “a”, “c”, “h” are presented one at a time.
The first letter seen is the “P”. A limited number of ‘dots’ in the retina would be activated by seeing a “P”.
The square that contains the “P” may have 20 active dots, and together they represent a “sequence”.
An instant later, the second letter “e” appears and activates its own set of dots, forming another sequence.
The sequence for the “P” is connected (via a basal dendrite) to the next item in the sequence, the “e”.
So the sequences of dots are like slots in a one-dimensional array.
The first time these letters are encountered, they are really:
sequence of darkened bits (“P”) → basal dendrite → next sequence (“e”) → basal dendrite → next sequence (“a”) → basal dendrite → next sequence (“c”) → basal dendrite → next sequence (“h”).
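The chain above can be sketched in code, under the simplifying assumption that each letter activates a fixed set of dots. The dot numbers are invented, and a plain dictionary stands in for the basal-dendrite links:

```python
# Each letter activates a made-up set of "dots" (a sparse code).
codes = {
    "P": frozenset({3, 17, 42}),
    "e": frozenset({5, 19, 40}),
    "a": frozenset({2, 11, 33}),
    "c": frozenset({7, 21, 38}),
    "h": frozenset({1, 9, 27}),
}

# Learn the chain P → e → a → c → h: each code links to the code
# that followed it (the dictionary plays the basal-dendrite role).
links = {}
word = "Peach"
for prev, nxt in zip(word, word[1:]):
    links[codes[prev]] = codes[nxt]

# Replaying the learned links from "P" recovers the whole word.
active = codes["P"]
out = ["P"]
while active in links:
    active = links[active]
    out.append(next(k for k, v in codes.items() if v == active))
print("".join(out))  # → Peach
```

Once the links exist, seeing “P” is enough to anticipate the rest of the chain, which is the lookahead described next.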
So, in humans, as new sequences flow in, we check whether they are familiar. If so, the brain “cheats” and looks ahead in the matching sequence from the past.
Obviously, this is a wildly different approach from the one used in most of today’s naive “AI” implementations.
In summary, the true AGI we seek will not come from Transformer-based approaches.
A transformer takes a "word" from the input and compares it to each of the other words in the input message for contextual similarity. That's a clever trick, but it is not what human brains do, which is to actually understand what an object is.
Homework for anyone attending the next meeting is reading Jeff Hawkins' book "A Thousand Brains: A New Theory of Intelligence".
Event Venue
647 Virginia Avenue, Indianapolis, United States
USD 0.00