I’m hoping this thread can be a longer-term brainstorming session about how we can reach a new scale of high-quality neuron reconstruction, possibly beyond 10 neurons a day.
After looking into this for a while, our current line of thinking is to maximize the power of our highly skilled community while using computer assistance for the less complicated areas, which require less human ability to troubleshoot problematic spots. To that end, we are thinking of two things:
- Have computer agents handle the obvious work, deploying the best algorithms and improving them through human feedback. Each new neuron would then start with most of the obvious parts already reconstructed, so people can spend their invaluable time on the places where the computer does badly (still roughly the hardest 40% of the neuron).
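For anyone curious how such a loop might be wired up, here is a minimal sketch. Everything in it (the `Segment` class, the confidence scores, the 0.9 threshold) is an illustrative assumption, not our actual system: segments the model is confident about are pre-filled automatically, the rest go to skilled players, and each human fix is logged as a training example for the next model.

```python
# Hypothetical sketch of the "agents + human feedback" loop.
# Names and thresholds are illustrative assumptions, not the real pipeline.
from dataclasses import dataclass, field


@dataclass
class Segment:
    seg_id: int
    confidence: float   # model's confidence in its own reconstruction
    auto_label: str     # what the algorithm proposed


@dataclass
class ReconstructionQueue:
    auto_accepted: list = field(default_factory=list)
    needs_human: list = field(default_factory=list)
    training_examples: list = field(default_factory=list)

    def triage(self, segments, threshold=0.9):
        """Pre-fill the obvious parts; route the rest to skilled players."""
        for seg in segments:
            if seg.confidence >= threshold:
                self.auto_accepted.append(seg)
            else:
                self.needs_human.append(seg)

    def record_correction(self, seg, human_label):
        """Each human fix becomes a training example for the next model."""
        self.training_examples.append((seg.seg_id, seg.auto_label, human_label))


queue = ReconstructionQueue()
queue.triage([Segment(1, 0.97, "merge"), Segment(2, 0.55, "split")])
queue.record_correction(queue.needs_human[0], "merge")
```

The point of the `record_correction` step is the feedback part of the idea: human effort on hard cases is not thrown away, it is what makes the next round of "obvious stuff" larger.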
- Auto-correct the consensus with another algorithm that either fixes whatever is obviously wrong or creates hotspots that need skilled human discrimination. Here are examples from a “fixer-upper” algorithm that we hope to release soon. Before:
White areas are where the algorithm is fairly sure it is right; orange marks the places where it needs help from people.
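The white/orange overlay boils down to thresholding a confidence map. A minimal sketch, assuming a per-voxel confidence array and an arbitrary 0.8 cutoff (the real algorithm's criteria are surely more involved):

```python
# Illustrative sketch of the white/orange overlay: threshold a per-voxel
# confidence map into "fairly sure" (white) regions and "needs human help"
# (orange) hotspots. The 0.8 cutoff is an assumed value for illustration.
import numpy as np


def hotspot_mask(confidence: np.ndarray, cutoff: float = 0.8) -> np.ndarray:
    """Return a label map: 1 = white (confident), 2 = orange (hotspot)."""
    mask = np.ones_like(confidence, dtype=np.uint8)   # default: white
    mask[confidence < cutoff] = 2                     # low confidence: orange
    return mask


conf = np.array([[0.95, 0.40],
                 [0.85, 0.10]])
mask = hotspot_mask(conf)
```

The useful property of a mask like this is that it turns "check the whole neuron" into "check only the orange cells", which is where the time savings for skilled players would come from.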
Some more examples. Before:
Are these changes going to help? Maybe. We will try them and see what else is keeping us from reaching both the volume and the quality we want.
As always, we cannot do this alone, and in many ways you may have a better sense than we do of what we need to get there. So the primary question we’d love to hear your thoughts on is:
What else should we be looking into or trying in order to maximize the neuroscience benefit of your amazing reconstruction skills, toward tens or hundreds of neurons rather than a few?
Let us know!