Mozak

Visualizing and Interfacing in Virtual Reality

Hi everyone,

I’m Diego and I’m a CS student at MIT.

I just recently discovered this game and I have to say it’s pretty cool. The first thing that came to my mind when I started playing was: “Would this be easier to do in VR?”

I’d love to hear your opinions/thoughts on it. If this sounds intriguing, I’d be happy to work with some people to get a small prototype going.

There is already a precedent for visualizing and interfacing with 3D scans in VR, implemented by Valve, the gaming company:
https://www.youtube.com/watch?v=-PVAcLlYUpg

Hi Diego,

Thanks for trying out the app. While we don’t have plans right now to make this a VR application (there is a large backlog of things we need to fix just as a standalone web app), there is a member of our team who is interested in VR and has done some VR development. I’ll chat with him and get back to you with his thoughts.

Hi Diego,

Apologies for such a long delay in responding. In general we are not planning on doing any VR-related development, although we agree that being able to perform traces using your hands would be much more intuitive and natural than what our current web application offers.

I spoke to one of our team members who has experimented with VR, and one of the challenges he mentioned was displaying enough of the neuron image at high resolution. A single full neuron image is several gigabytes in size at the highest resolution level, which would not fit in most graphics cards’ memory. Our web app avoids the problem by showing only a small subvolume and loading a new one each time the user scrolls to a new area or zooms out. This would not look good in a headset: either the trace area would appear very small, or it would look blurry and pixelated if the image were blown up.
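For anyone curious, here is a rough sketch of the subvolume-streaming idea. This is not our actual code; all of the names and numbers (`Subvolume`, `fetchSubvolume`, `uploadToGPU`, the 256-voxel budget) are hypothetical placeholders:

```typescript
// Rough sketch of subvolume streaming: keep only a small cube of the full
// neuron image on the GPU and swap it out when the view moves or rezooms.
// All names and numbers here are hypothetical, not Mozak's real code.

interface Vec3 { x: number; y: number; z: number; }

interface Subvolume {
  center: Vec3;       // world-space center of the loaded cube
  sizeVoxels: number; // edge length of the cube, in voxels
  mipLevel: number;   // 0 = full resolution, higher = coarser
}

// Stubs for the server fetch and GPU upload (hypothetical).
declare function fetchSubvolume(
  center: Vec3, sizeVoxels: number, mipLevel: number): Promise<Uint8Array>;
declare function uploadToGPU(voxels: Uint8Array): void;

const VOXEL_BUDGET = 256; // edge length that fits comfortably in GPU memory

let current: Subvolume | null = null;

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Called whenever the camera pans or the zoom level changes.
async function updateView(center: Vec3, zoom: number): Promise<void> {
  // Zooming out selects a coarser mip level instead of loading more voxels,
  // so the amount of data on the GPU stays roughly constant.
  const mipLevel = Math.max(0, Math.round(Math.log2(1 / zoom)));

  const needsReload =
    current === null ||
    current.mipLevel !== mipLevel ||
    distance(center, current.center) > current.sizeVoxels / 4;

  if (needsReload) {
    const voxels = await fetchSubvolume(center, VOXEL_BUDGET, mipLevel);
    uploadToGPU(voxels); // replace the previously loaded cube
    current = { center, sizeVoxels: VOXEL_BUDGET, mipLevel };
  }
}
```

The same trick would apply in a headset, but the loaded cube would also have to fill your field of view at a usable resolution, which is exactly where the memory budget starts to hurt.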

Hello! I’ve noticed that while I’m tracing, I frequently use the rotation feature. It helps me see the depth of the scene. I think the main benefit of VR isn’t the input (the ability to use controllers) but the ability to better perceive depth. Also, a VR app could use the same approach of showing only a small volume of data. Look into the Google Blocks app; it has the easiest camera controls and a quick way to zoom. Also, drawing lines in a 3D volume reminds me of the Tilt Brush app.

Solid observations. We’ve got a lot on our plate development-wise at the moment, but this is definitely something for us to consider should we take the leap to VR. Thanks for the input on input!