Please read our sticky to know what's going on: http://redd.it/3cawxb
Deep Learning is a new field within Machine Learning. In the past 4 years, researchers have been training neural networks with a very large number of layers. These algorithms are learning to classify images with much greater accuracy than before: you can give them an image of a cat or a dog and they will be able to tell the difference. Traditionally this has been nearly impossible for computers but easy for humans.
Deep Learning algorithms are trained by giving them a huge number of images and telling them what object is in each image. Once a network has seen, say, a hundred types of dog heads, each a thousand times from a hundred angles, it has been 'trained'. Now you can give it new images and it will spot dog heads within them, or tell you that there are none at all. It can also say how unsure it is.
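That "how unsure it is" part typically comes from the network's final softmax layer, which turns the raw scores for each class into probabilities. A minimal sketch in numpy (the scores and class names here are invented for illustration, not taken from any real network):

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability, then normalise to probabilities.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Hypothetical raw scores a trained network might produce for one image.
class_names = ["dog head", "cat", "no object of interest"]
scores = np.array([4.1, 1.2, 0.3])

probs = softmax(scores)
for name, p in zip(class_names, probs):
    print(f"{name}: {p:.1%}")
```

A confident network puts most of the probability mass on one class; an unsure one spreads it out.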
It was always hard to tell what the algorithms were 'seeing' or 'thinking' when we gave them new images. So in June 2015, Google engineers published a method for visualising what the algorithms saw. Towards the end of June 2015 they released their code, so people could see what the trained neural networks were seeing in any image they wanted.
We created this sub as a place to post these images. It is also fast becoming the place to discuss techniques and methods, and to try out totally new ideas, such as video.
How do I make my own?
●●● Without programming experience: ●●●
Note that this is popular across the whole internet at the moment, so both of the sites below have huge queues (4,000 images for the second one at the time of writing).
Just give this website an image and see what it does (possibly NSFW): http://psychic-vr-lab.com/deepdream/
This site might work if the above is down for whatever reason: (possibly NSFW) http://deepdream.pictures/static/#/
●●● With programming experience (python): ●●●
Check the sticky for guides. It's different for each operating system, and you may even wish to run it on a server in the cloud if your graphics card is not a supported one.
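At its core, the released code does gradient ascent on the input image to amplify whatever a chosen layer of the network responds to. A toy sketch of that loop in numpy, where a random linear map stands in for a real trained convolutional layer (in the actual code this would be a layer of a network like GoogLeNet, run through Caffe):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one trained layer: a random linear map, so the example
# is self-contained. Real DeepDream uses a layer of a trained network.
W = rng.normal(size=(32, 64))

def objective(x):
    # DeepDream maximises the magnitude of a chosen layer's activations.
    a = W @ x
    return 0.5 * np.dot(a, a)

def grad(x):
    # Analytic gradient of the objective with respect to the input "image".
    return W.T @ (W @ x)

# Start from a small random "image" (flattened to a vector for simplicity).
x = rng.normal(scale=0.01, size=64)
step = 0.01

before = objective(x)
for _ in range(100):
    g = grad(x)
    # Normalising the gradient keeps the step size comparable from
    # iteration to iteration.
    x += step * g / (np.abs(g).mean() + 1e-8)
after = objective(x)
```

After the loop the objective has grown: the input now exaggerates whatever the layer responds to, which on a real network is what produces the dog heads and eyes.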
Why are there so many dog heads, chalices, Japanese-style buildings and eyes being imagined by these neural networks?
Nearly all of these images are created by 'reading the mind' of neural networks that were trained on the ImageNet dataset. This dataset contains many different types of images, but it happens to include a ton of dogs, chalices, and so on.
If you were to train your own neural network on lots of images of hands, you could generate your own deepdream images from that net and see everything rendered as hands.
Can this be done on audio? video?
Yes. Check the sticky for some guides.