
Feedback Wanted: Quest for 10 neurons a day

Hi Folks,
I’m hoping this thread becomes a longer-term brainstorming session about how we can reach a different scale of high-quality neuron reconstruction, possibly beyond 10 neurons a day.

The current line of thinking, after looking into this for a while, is to figure out ways to maximize the power of our highly skilled community, while at the same time using computer assistance in the areas that are less complicated and thus require less human ability to troubleshoot problematic spots. To that end, we are thinking of two things:

  1. Have computer agents that deploy the best algorithms do the obvious stuff, improving through human feedback along the way. Each new neuron would then start with most of the obvious structure already reconstructed, so that people spend their invaluable time on the places where the computer does badly (still roughly the hardest 40% of the neuron).
  2. Auto-correct the consensus with another algorithm that either fixes everything that is obviously wrong or creates hotspots that need skilled human discrimination. Here is an example of a “fixer-upper” algorithm that we are hoping to release soon. Before:
    [image]
    After:
    [image]
    The white areas are where the algorithm is fairly sure it is right; the orange areas are where it needs help from people (see the sketch after the examples below).
    Some more examples. Before:
    [image]
    After:
    [image]
    Before:
    [image]
    After:
    [image]
Are these changes going to help? Maybe. We will try them and see what else is preventing us from getting both the volume and the quality.

As always, we cannot do this alone, and in many ways you may have a better sense of what we need to get there than we do. So, the primary question we’d love to hear from you about is:

What else should we be looking into or trying in order to maximize the neuroscience benefit of your amazing reconstruction skills, toward 10 or even hundreds of neurons rather than a few?

Let us know!

Zoran

Another feature we are looking into is increasing the value of traces as the neuron challenge ages. At the beginning a trace is valued at 1x; as things get harder the point multiplier goes to 2x; and at the end there is a bounty for the hardest part of the neuron, with a multiplier close to 10x. I imagine this, like any change in scoring, could be somewhat controversial, so I’d like to hear your thoughts on this idea, as well as on alternative ideas that would reward the really hard “last mile” work that usually makes the difference between an OK reconstruction and a “gold standard” reconstruction.
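Purely as an illustration (a hypothetical sketch with made-up breakpoints, not our actual scoring code), the schedule could look something like:

```python
def point_multiplier(fraction_of_challenge_elapsed):
    """Illustrative multiplier schedule: traces are worth more as the
    challenge ages, with a bounty-like jump for the hardest "last mile".
    The breakpoints and values here are hypothetical."""
    if fraction_of_challenge_elapsed < 0.5:
        return 1.0   # early: mostly easy tracing
    elif fraction_of_challenge_elapsed < 0.9:
        return 2.0   # middle: things get harder
    else:
        return 10.0  # end: bounty for the hardest remaining parts
```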

During our chat this morning we discussed a few things. Feel free to hop in and leave feedback on this post as well as the following:

  • Focusing on obscure areas will give everyone “the most bang for their buck,” since automated algorithms can handle the really obvious stuff. This will drastically increase the number of neurons we can reconstruct.
  • One way we might reinforce this is a score multiplier that awards 2x, 3x, 4x score as the challenge goes on: higher points for the remaining untraced areas.
  • A “time remaining” statistic would be helpful for each neuron so players can be informed about the life cycle of that challenge.
  • Greater feedback as to how much of a player’s individual trace was/wasn’t accepted into consensus after the challenge closes would be helpful.

What do you think?


Maybe add a consensus feature that lets the user, after selecting an area to map or to help build consensus, agree with the consensus without drawing anything: if a segment looks good as they view it, they can click that segment and agree to it.


I like the idea of having an AI-“prepped” neuron so that all we do is solve the obscure areas. I would go even further: what if the AI proposed different color-path options in the obscure areas and we acted as quality controllers?


I like these ideas and can see how they will increase quantity and provide some quality tweaks, especially if consensus can go from 3 players to 5+.
I would especially like more feedback on my performance.
I believe I’ve gotten good at following the broad misty trails, but am I picking the “best” path? Would it be possible for the team to put out practice segments to help with this? For example, take a scientist-completed neuron with tricky elements and provide it as a self-test. When we complete the segment, we can compare it to the scientist’s version and see how we can improve. I believe this would help with the quality goal.


I like the idea of not tracing the parts a computer can do. There’s really no point. There would definitely be less time spent tracing per neuron, but more time hunting and squinting. Something like the H key to make the computer traces temporarily invisible but leave your own visible might make it easier to see the parts we need to work on.

I also like the idea of knowing the timing. On the downside, I can see more pressure to trace questionable bits in order to increase your score, especially as the point value goes up.

Regarding the fixer-upper algorithm: I’d like to see an easy way to erase the questionable orange tracing that the algorithm isn’t sure about. Maybe something like the trash can we have now that would erase an entire orange segment and leave the white, rather than having to erase it dot by dot to redo it.


Yes, remember the orange is basically the unchanged yellow consensus that looks wrong to the algorithm, though it isn’t completely sure. The method also has a proposed alternative, so one option is to create a hotspot that lets you vote for one of three options: the old trace (orange), the new algorithmic option (say it’s white), or “both are wrong, let me do it myself.” A sketch of how such a vote might be tallied is below.
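This is only an illustrative sketch of a three-way hotspot tally; the option labels and the simple plurality rule are assumptions, not a finalized design.

```python
from collections import Counter

# Hypothetical labels for the three hotspot options described above.
OPTIONS = ("keep_old_orange", "use_new_white", "both_wrong_retrace")

def resolve_hotspot(votes):
    """Return the winning option for one hotspot, or None if there is
    no clear winner yet. `votes` is a list of option labels from players."""
    counts = Counter(v for v in votes if v in OPTIONS)
    if not counts:
        return None
    ranked = counts.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # tie: leave the hotspot open for more votes
    return ranked[0][0]
```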

We are thinking of showing two things:

  1. Timing: either an approximate age, so you know how old the neuron is, or an exact closing time once we know it’s very close to completion. This would be an improvement over just realizing a neuron went away without you knowing.
  2. A point multiplier factor that starts at 1 and keeps going up, to reward the really tough, faint stuff at the end, which could be worth 6 or 10 times more. We think this could also help mitigate situations where people feel like they’re arriving late to the neuron party and all the points are already taken.

Cool, I think @tomtri asked for this as well. We need to add a lot more gold standards, but also build a new way to view old challenges, where you can see the final consensus, and maybe the gold standard with an individualized view: the parts of your work that matched the gold standard, the parts you missed, and the parts you traced that the gold standard thinks shouldn’t be traced. That way everyone would get direct feedback on their work after the neuron is complete.

I wish the AI were that good, but it’s actually terrible in obscure areas, which is the main reason people power is so needed. Maybe over time it will learn from what people find and improve, but that would take some time.

I understand that one of the targets is to increase the productivity of tracing. Well, one of the challenges I face as a tracer is the number of false positives: a forest (not Cixin Liu’s dark one) of dots glowing in the light sheet (if LSFM is the technique you are using to generate the 3D volumes). Getting rid of these dots may improve our ability to trace. Take slice 8 in Mouse-V1-209: all those “bad dots” seem to have similar intensity, so a histogram of intensities would help narrow them down in order to delete them. Another feature that would help identify the bad dots is size/vicinity. These two properties, intensity and size, could be used by a program to analyse each slice independently, with the human operator given the power to choose the bin width of the histogram. I realise it may involve a bit more work behind the scenes, but a cleaner challenge may ultimately increase the productivity of tracers.
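A rough sketch of the kind of intensity + size filter I have in mind, assuming each slice is a NumPy array of pixel intensities (the function and parameter names are mine, just for illustration):

```python
import numpy as np
from scipy import ndimage

def remove_small_bright_blobs(slice_img, intensity_threshold, max_blob_px):
    """Zero out connected components that are brighter than
    intensity_threshold but smaller than max_blob_px pixels,
    i.e. the small isolated bad dots."""
    mask = slice_img > intensity_threshold               # candidate dots
    labels, n = ndimage.label(mask)                      # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # pixels per component
    small_ids = np.where(sizes < max_blob_px)[0] + 1     # labels of small blobs
    cleaned = slice_img.copy()
    cleaned[np.isin(labels, small_ids)] = 0
    return cleaned

# The operator could pick intensity_threshold by inspecting a histogram,
# with a bin width of their choosing:
# hist, edges = np.histogram(slice_img, bins=64)
```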

Interesting. That would be a user-initiated blob filter where the user adjusts intensity thresholds. Could you post a few screen captures of examples where you’d want such a tool?

If you want 10 neurons per day, you’ll have to post 10 a day. Personally, I can only handle one every two days or so. Perhaps with a lot of AI pre-tracing I could do one a day, but I’m a long way from 10.

We will see how we can ramp up. It surely won’t happen overnight, but to be clear, I meant 10 neurons collectively, not individually. :slight_smile:

It is quite tricky, as the images online maintain transparency. If we could switch the transparency of the slide on and off, the choice would become a bit more informed (it would allow a user to judge better whether a trace goes through different planes or not).

Here we go: in slice 4 all the dots should be removed, as they belong to a plane of only blobs:
[image]

Slice 7 too:
[image]
In slice 8 one can start to see, due to transparency, hints of the dendrites (red); the dots marked with blue arrows should still be removed:
[image]

In slice 9 the dendrites become more obvious:
[image]

The choices are:

  1. filter the blobs in-house (with a bit of automation) before you upload the volumes for the users, or
  2. give us a tool to remove them (the tool would be an add-on to CtD with the following features: on/off transparency, plus a histogram or some other simple interface that allows deleting pixels based on intensity + size).

(sorry for the multiple edits; I can upload only one image and post only 3 replies)

One more idea, based on the “less is more” concept: sometimes it would be useful to make a stack of slices 100% transparent (from the bottom or top of the pile, say the first 20 of the 56), thereby eliminating the interference of dendrites and non-specific blobs. This might be easier to implement than the filtering mentioned above.
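A rough sketch of what I mean, assuming the stack is a NumPy array of shape (z, y, x); the function name is hypothetical:

```python
import numpy as np

def clip_z_range(volume, z_start, z_stop):
    """Return a copy of the stack with slices outside [z_start, z_stop)
    made fully transparent (zeroed out)."""
    clipped = np.zeros_like(volume)
    clipped[z_start:z_stop] = volume[z_start:z_stop]
    return clipped

# Example: hide the first 20 slices of a 56-slice stack.
# visible = clip_z_range(stack, 20, 56)
```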

Interesting. Does that happen often, a single slice with all the noise? If so, an easier thing to do would be to allow users to crop slices that are not useful. That would be a lot easier to implement as well.

A single slice with all the noise, no… rather many slices with a lot of noise. Cropping them or making them fully transparent (including the noise) might help.

Could I possibly get access to the full stack of Mouse-V1-209 in JPG format, to try and test my rusty ImageJ skills on the blob filtering?

Have you tried using these:

  • Holding down Shift and rolling the mouse wheel limits the width of the volume (z-slices).

  • Alt + mouse wheel moves the center of the volume (again, along the z-axis).

Together they should allow you to clip the volume from either side (though you can’t remove slices in the middle).

We’d have to export them from the volume representation we have. They initially get processed from very large uncompressed TIFF files, but we don’t keep those around because that would be quite expensive. I’ll ask the team and see what’s possible.