"Mr. M. Frydman, an engineer, remarked on the subject of Grace, 'A salt doll diving into the sea will not be protected by a waterproof coat.' It was a very happy simile and was applauded as such. Maharshi added, 'The body is the waterproof coat.'" -- Talks With Sri Ramana Maharshi
Identity is the artificial flower on the compost heap of time. -- Louis Menand, "Listening to Bourbon"
What are you, at your core? Awareness is a key part, one that we share with many animals, and can feel when looking into a dog's eyes. I am working on creating that feeling of awareness with photos, so that you can look into the eye of the computer screen and feel something responding to you. To do this, Phobrain uses a sort of downloaded brain, in the form of neural nets trained on identified photo associations, and adds variation based on your mouse movement, click rhythms, and a DNA dynamics simulation to give it a (sort of) tin man's heart.
How does it work? You look at two photos, and see if you can figure out what they have in common. Then you see what that pair has in common with the next one, and what story might emerge, like the dreams of a dog scratching an electronic itch.
In the default Browse Mode, just click on one of the photos to search for the next pair. Each photo can have up to eight different personalities assigned to handle clicks on different areas - see if you can see a difference! The theme it chooses is based on your click timing, as if you are throwing a stone into a pond. (If you wave the mouse around or draw on the photo, it will also have an effect on the choice.) Clicking on the left-hand photo will use a group of neural networks selected from a pool of about 250 to pick dynamically from all the possible pairs (about 20M). Clicking on the right-hand photo will choose a pair from the set used to train the neural nets (about 200K), which have been hand-screened for interest. If you want to see what you are missing, there are hidden 'Random choice' buttons just above and outside the left/right top corners of the photos.
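For the curious, here is a minimal sketch of how click area and timing could steer the choice of theme. All names and the specific grid, constants, and hashing are illustrative assumptions, not Phobrain's actual code: the idea is only that a click's position selects a 'personality' region and the interval since the last click nudges which theme handles the next pair.

```python
import time

REGIONS = 8   # personalities per photo (per the text above)
THEMES = 250  # approximate size of the neural-net pool

def pick_theme(click_x, click_y, width, height, last_click_t, now=None):
    """Combine click position and timing into a theme index (illustrative)."""
    now = now or time.time()
    # Region: a hypothetical 4x2 grid over the photo gives 8 areas.
    col = min(3, int(4 * click_x / width))
    row = min(1, int(2 * click_y / height))
    region = row * 4 + col  # 0..7
    # Timing: milliseconds since the previous click, like ripples
    # from a stone thrown into a pond.
    dt_ms = int((now - last_click_t) * 1000)
    return (region * 31 + dt_ms) % THEMES

# A click near the top-left, 700 ms after the previous one:
theme = pick_theme(120, 80, 640, 480, last_click_t=0.0, now=0.7)
```

Any scheme that mixes region and rhythm into an index would serve the same purpose; the point is that the same spot clicked at a different tempo lands on a different theme.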
Screens. The screens show either portrait-oriented or landscape-oriented pairs. The landscape pairs can be either side-by-side or stacked. Side-by-side landscapes are recommended if a wide screen is available.
Search Mode. In this mode, unscreened pairs are formed dynamically in response to your choices and timing. Several options for choosing the next pair appear below the photos, some depending on whether AI or Golden Angle is chosen:
The yellow + gives color similarity (using one of 10 algorithms). - gives color opposite. c chooses curated pairs. The purple options, Σ1 Σ2 Σ3 Σ4 Σ5 ΣΣ Σ𝓍, apply various combinations of neural networks. The grey numbered options, 2 3 8 27 32K, apply the Golden Angle in spaces of the numbered dimensions. | chooses a pair completely at random. The green + chooses a match based on descriptions using keywords. In Search Mode, clicking on a photo results in a match to its neighbor (rather than replacing the pair), depending on whether you click on a corner (4 different color-matching algorithms) or the center (keyword matching).
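One plausible reading of "the Golden Angle in spaces of the numbered dimensions" is a low-discrepancy sequence: just as the 2-D golden angle steps evenly around a circle, stepping through a d-dimensional feature space by an irrational vector derived from the generalized golden ratio visits it evenly without repeating. The sketch below shows that generalization; the function names and the use of such a sequence here are assumptions about the mechanism, not Phobrain's confirmed implementation.

```python
def generalized_golden(d, iters=50):
    """Unique positive root of x**(d+1) = x + 1 (d=1 gives the golden ratio)."""
    x = 2.0
    for _ in range(iters):
        x = (1 + x) ** (1 / (d + 1))  # fixed-point iteration converges to the root
    return x

def golden_steps(d, n):
    """First n points of a d-dimensional golden-angle (Kronecker) sequence."""
    g = generalized_golden(d)
    alpha = [(1 / g) ** (i + 1) for i in range(d)]
    # Each step advances by the irrational vector alpha, wrapping within [0,1)^d.
    return [[(k * a) % 1.0 for a in alpha] for k in range(1, n + 1)]

# Each step lands on a fresh point in the d-dimensional space; the photo
# pair whose features lie nearest that point could be shown next.
points = golden_steps(2, 5)
```

With d=1 this recovers the classic golden ratio, and successive points spread out maximally instead of clustering, which suits a browsing interface that should avoid repeats.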
Clicking in the grey area next to the options, to the left of the yellow + or to the right of the green +, will cause any keywords shared by the photos, or the color-matching algorithm used to choose them, to appear to the right of the options for as long as the mouse button is held down. Clicking in the grey area next to a photo toggles it with the one that was there before. Clicking below the photos, in the grey area just above the options, toggles both photos with the non-showing pair. Holding this area down for a second restores the most recent pair.
The keywords used by the + option allow some storytelling, like a psychic crossword, or a therapist analyzing dreams. The common features could be people, things, colors, shapes, textures: whatever we think would make the most obvious and interesting connections as you hold both pictures in your mind. (Again, you can see the keywords that matched by clicking and holding the mouse next to the options.) The keywords for the features are scripted and discussed like characters in Sesame Street, merging points of view to create a hybrid 'brain'. Each photo is like a brain cell, connected to other photos by keywords they have in common. You explore this 'brain' when you use the + option.
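The 'brain' described above can be pictured concretely: photos are nodes, and shared keywords are the connections the + option follows. A small sketch, with made-up photo names and keywords:

```python
# Illustrative data: each photo is a 'brain cell' tagged with keywords.
photos = {
    "p1": {"woman", "hand", "phone", "face", "blue"},
    "p2": {"boy", "statue", "red", "face"},
    "p3": {"dog", "blue", "grass"},
}

def keyword_neighbors(photo, photos):
    """Photos sharing at least one keyword with `photo`, with the shared words."""
    mine = photos[photo]
    return {
        other: mine & kws          # set intersection = the connection
        for other, kws in photos.items()
        if other != photo and mine & kws
    }

# From p1, both p2 (via "face") and p3 (via "blue") are reachable.
neighbors = keyword_neighbors("p1", photos)
```

Traversing these connections from photo to photo is the 'psychic crossword' in miniature: each hop must reuse at least one word from the previous cell.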
Don't worry if you don't see any similarities at first — it's not perfect — but if you keep at it, you will start to see themes that last over a few pictures, then more will start making sense — it's like learning a language you find you already know. You can use another option to change the subject when you get tired of a theme, then return to + if you see something you want to pursue.
Looking at the photo on the left above, we might describe it with the words "woman hand phone face blue". If I click on + for it, I expect to see another picture with at least one of these features, but will blue-ness jump out for me on the next photo? As you go from picture to picture, it is a little like a crossword puzzle, matching up words instead of letters.
Now consider the photo on the right above: it is outdoors not indoors, in public not in private, the background has classic geometry, the real person in it is a boy and not a woman, and the color that jumps out is red instead of blue. On the other hand, there are two males in each picture, and there are representations of people (picture on phone, and statue). Perhaps the most interesting similarity between the two photos is that there is an interaction between a person (or people) and a representation of a person in each. This site can help build up your analytical abilities, although it does not do such a complicated analysis itself, and would be unlikely (we hope) to join these two pictures when the + option is used.
As when learning a language, you can enjoy the view and watch for patterns to emerge.
Sessions: Each browser creates its own session, which should keep you from seeing any repeats of pictures within a given View.
The dog's eyes: My goal is to make the site smart enough so that it seems alive, like the feeling you get when looking into a dog's eyes. The fading image when you enter the slideshow is a gesture toward that goal. More concretely, a live molecular dynamics simulation is used as a sort of heart: it is affected by clicks on the slideshow page, and in turn affects the next picture you see; and it gives a continuing life to the site.
Can you make it browsable? I don't plan to add any kind of browsability like other excellent sites have.
Can we upload pictures? I plan to add the ability to upload photos to the site.
Theory: (This describes the original, single-photo version.) A picture can tell a story that stands on its own and burns itself into your memory. Put two pictures together in sequence, and the 'picture' now exists in your memory as much as in your eye. The story becomes what is common to the pictures, and this competes for your attention with the other details. You may struggle to find a story and give up. My theory is that if you can find a story more often, you will become more engaged. According to a New York Times blog:
Japanese researchers found that dogs who trained a long gaze on their owners had elevated levels of oxytocin, a hormone produced in the brain that is associated with nurturing and attachment, similar to the feel-good feedback that bolsters bonding between parent and child. After receiving those long gazes, the owners' levels of oxytocin increased, too.
A more nuanced story about oxytocin from Wikipedia.
<——— oOo ———>
Listen, a woman with a bulldozer built this house of now
Carving away the mountain, whose name is your childhood home
We were trying to buy it, buy it, buy it, someone was found killed
There all bones, bones, dry bones
Earth water fire and air
Met together in a garden fair
Put in a basket bound with skin
If you answer this riddle
If you answer this riddle, you'll never begin
— Robin Williamson, Koeeoaddi There
In tribute to Lucy Reynolds, teacher of Graham technique and breeder of dogs.