Discussion about this post

Wondering:

In your paper you suggest an experiment for measuring the derivative of task performance with respect to a change in neural activity. I think the idea is that we could then show that the brain makes the neural change with the largest derivative?

If that's the case, how do you deal with the massive number of candidate neural-activity changes for any particular learning event? Surely you can't test and calculate the derivative for all of them?
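To make the question concrete, here is a toy finite-difference sketch of that kind of perturbation experiment. Everything in it (the quadratic "performance" function, the random candidate change directions, the sample of 100 candidates) is a made-up stand-in, not the actual experiment proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: "activity" is a vector, and task performance
# is higher the closer the activity is to a fixed target pattern.
target = rng.normal(size=8)

def performance(activity):
    # Negative squared error: larger is better.
    return -np.sum((activity - target) ** 2)

def finite_difference(activity, change, eps=1e-5):
    # Derivative of performance along one candidate change direction,
    # estimated by perturbing the activity slightly along it.
    return (performance(activity + eps * change)
            - performance(activity - eps * change)) / (2 * eps)

activity = rng.normal(size=8)

# Testing every candidate change is infeasible at scale; here we just
# sample a handful of random directions and keep the steepest one.
candidates = [rng.normal(size=8) for _ in range(100)]
best = max(candidates, key=lambda c: finite_difference(activity, c))
```

Even in this toy setting, the cost scales with the number of candidate directions tested, which is exactly the combinatorial worry raised above.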

DomPols:

I am very interested in learning rules in general and have done some reading on the matter, although I would still consider myself pretty new to the field, so I apologize if this comes off as ignorant. Are you aware of any learning rule (or combination of rules) that has all of the following properties?

- Works for time-varying input

- Generalizes over static input. I read a paper a while ago that classified shapes with an SNN, but the shapes were always the same size and in the same place. I tried to make tuning curves using the BCM rule, and it works really well, but if there is some random phase offset it completely fails, which is totally unrealistic.

- Works in large, complex networks. Most of the papers I have found for SNNs use simple, fairly shallow feed-forward networks, and applying their rules to more complex architectures has always failed for me.

- Weights stay semi-stable over time for a given behavior once the stimulus is removed

- One-shot learning, or learning from small data sets

- Backed up by physical experimentation. Something like snnTorch comes close on most of these, but its authors admit it is a fairly ad hoc mapping of the success of backpropagation onto spiking neuron models. A lot of papers either show good performance on complex tasks or use very biologically realistic rules, but I don't think I've seen both.
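For readers unfamiliar with the BCM rule mentioned above, here is a minimal rate-based (non-spiking) sketch of its weight update with a sliding modification threshold. The learning rates, the threshold time constant, and the two random input patterns are all illustrative choices, not taken from any of the papers discussed:

```python
import numpy as np

rng = np.random.default_rng(1)

def bcm_step(w, x, theta, eta=0.01, tau=0.1):
    # BCM: potentiate when the postsynaptic rate y exceeds the sliding
    # threshold theta, depress when it falls below it.
    y = float(w @ x)                        # postsynaptic rate
    w = w + eta * x * y * (y - theta)       # BCM weight update
    theta = theta + tau * (y ** 2 - theta)  # threshold tracks <y**2>
    return w, theta

w = rng.uniform(0.0, 0.5, size=10)
theta = 1.0

# Alternate between two input patterns; BCM tends to become
# selective for one of them.
patterns = [rng.uniform(0.0, 1.0, size=10) for _ in range(2)]
for step in range(500):
    x = patterns[step % 2]
    w, theta = bcm_step(w, x, theta)
```

The sliding threshold is what keeps the rule from running away, but, as noted above, selectivity learned this way is tied to the exact input pattern, so a phase offset in the input breaks it.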

I cannot stress enough that I am still pretty new and not professionally immersed in this field. This has just been a frustration of mine for a while, because the way this was presented in undergrad and in some introductory textbooks made it sound like STDP plus some homeostatic mechanism is sufficient for complex learning, and I just don't think that's true, given how many times rules like that have failed me in the simple simulations I've run.
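The STDP-plus-homeostasis combination being questioned here can be sketched as a standard pair-based update with a crude homeostatic correction on the mean weight. All parameters (amplitudes, time constant, spike-time distribution, the target mean of 0.5) are illustrative, not drawn from any particular textbook:

```python
import numpy as np

rng = np.random.default_rng(2)

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Pair-based STDP kernel. dt = t_post - t_pre in ms:
    # pre-before-post (dt > 0) potentiates, the reverse depresses.
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

# 50 synapses with random initial weights.
w = rng.uniform(0.2, 0.8, size=50)
for _ in range(200):
    # Random pre/post spike-time differences, one per synapse.
    dts = rng.uniform(-50.0, 50.0, size=50)
    w += np.array([stdp_dw(dt) for dt in dts])
    # Crude homeostasis: nudge the mean weight back toward 0.5.
    w -= 0.1 * (w.mean() - 0.5)
    w = np.clip(w, 0.0, 1.0)
```

A rule this local has no notion of task performance, which is one way to see why it can stabilize weights without producing complex learning on its own.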

