I bumped into this video below explaining the process, and thought it was some food for thought.
I ran InMoov's hand through Google's Deep Dream (one source: http://psychic-vr-lab.com )
[Images: original image / 1st iteration]
What the heck does it see that I can't see? Yikes, spooky......
(just noticed the glass angel in the window turns into a hybrid Cat/Cow combo)...
Interestingly, it sharpens and also refreshes the content, keeping the image alive:-
[Images: 2nd iteration / 3rd iteration]
Things to note :-
Each iteration regenerates the scene (it's sharper, and encouragingly without the artifacts a normal image-sharpening filter would introduce).
The angel is looking a bit more angelic..
There is the beginning of a man holding the wrist part, who looks to be controlling the wrist joint, with his head acting as the ring/pinky finger flex point.
Thumb dog is getting quite furry now.
Some blue robot characters are appearing bottom left.
One sinister character hides directly behind the wrist, and another appears in the gap at the base of the thumb.
There is a monkey-type character developing on the far left of the couch as we look at it..... it's frightening to look closer.
There is a lot of speculation in reading this particular picture. However, bear in mind that Google's Deep Dream is biased towards finding people, eyes, animals and buildings (sinister!!), so to train it to find other things, those other things must be placed in the deep-learning matrix. So us MRLers need more plug, socket, connector, wall, table, chair, stair, ball and hand-tool speculation matrices.
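For anyone curious what the iterations are actually doing under the hood: Deep Dream is basically gradient ascent on a network layer's activation, so each pass amplifies whatever the layer already responds to (hence the bias above). Here's a minimal toy sketch of that loop. All names are my own invention, and a hand-written "edge detector" stands in for a real trained CNN layer, just so the idea is self-contained; a real run would use something like Inception via TensorFlow or PyTorch.

```python
# Toy sketch of the Deep Dream loop: gradient ascent on a "layer" response.
# The hand-made edge filter below is a stand-in for a trained CNN layer.
import numpy as np

def activation(img):
    """Stand-in 'layer': responds to horizontal edges."""
    return img[:, 1:] - img[:, :-1]

def activation_grad(img):
    """Analytic gradient of sum(activation**2) with respect to the image."""
    a = activation(img)
    g = np.zeros_like(img)
    g[:, 1:] += 2 * a   # contribution of the '+' term in each difference
    g[:, :-1] -= 2 * a  # contribution of the '-' term
    return g

def dream_step(img, lr=0.01):
    g = activation_grad(img)
    g /= (np.abs(g).mean() + 1e-8)      # normalise step size, as Deep Dream does
    return np.clip(img + lr * g, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((8, 8))                # tiny greyscale "photo"
score_before = float((activation(img) ** 2).sum())
for _ in range(50):                     # one "iteration" = many ascent steps
    img = dream_step(img)
score_after = float((activation(img) ** 2).sum())
print(score_after > score_before)       # the filter's response keeps growing
```

The point is that the loop never adds new information; it only exaggerates what the filter (or, in the real thing, the trained network) already "wants" to see, which is exactly why eyes, dogs and buildings keep bubbling up out of the couch.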
N.B.1 :- Could it be used for paranormal activity? I.e. sped up to real time, to reveal the ghosts and demons sitting right next to us at this very moment, the very ones that release the blue smoke and add syntax errors to our code.
N.B.2 :- Analyse "before and after" disaster/crime scenes, to see whether the weak spots might shed some light on events (e.g. 9/11, Chernobyl, Fukushima).