In addition to the Processing visualisation presented as the final result of our hackathon project, I developed a new means of representing the 2300+ images of the Gugelmann collection.
The same galaxy-like formation of images was ported to the Unity3D game engine and is rendered to an Oculus Rift virtual reality headset. Navigation is simple and intuitive: visitors can move forwards and backwards, always heading in the direction they are looking. The flight through the cloud leaves a trail (the white line in the above video), which aids quick orientation and reveals areas not yet explored. The trails collected from many individual users could later be used to train a neural network to make recommendations or to establish new similarity relationships between items.
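For readers curious how this kind of navigation could be wired up, here is a minimal sketch of a Unity script that flies the camera rig along the gaze direction and records the path with a TrailRenderer. The class name, the `speed` parameter, and the input mapping are my own illustration, not the project's actual code; it simply assumes the headset drives the main camera's orientation and that a TrailRenderer component is attached to the moving object.

```csharp
using UnityEngine;

// Hypothetical sketch: fly forwards/backwards along the camera's gaze direction
// and leave a trail behind. Attach to the rig object that carries the VR camera
// and a TrailRenderer component.
public class FlyCamera : MonoBehaviour
{
    public float speed = 5f;          // flight speed in units per second (illustrative value)
    private TrailRenderer trail;

    void Start()
    {
        // The TrailRenderer draws the white line marking the path already flown.
        trail = GetComponent<TrailRenderer>();
        trail.time = Mathf.Infinity;  // keep the whole trail instead of fading it out
    }

    void Update()
    {
        // The headset rotates the camera, so its forward vector is the gaze direction.
        Vector3 gaze = Camera.main.transform.forward;

        // Positive input flies forwards, negative input backwards, always along the gaze.
        float input = Input.GetAxis("Vertical");
        transform.position += gaze * input * speed * Time.deltaTime;
    }
}
```

Logging `transform.position` each frame (or per session) would yield the raw trail data mentioned above as a possible training set for recommendations or similarity relationships.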
Unfortunately, the video cannot convey the experience to spectators the way a live session would. Feel free to contact me if you want to try it!
There is also a browser version for Google Cardboard; see here.