How can we understand, externalize and materialize the subconscious of social media users?
In my world, arcCity 2050, society is driven by creative exploration of all sorts. People experiment with new fields and use emerging technologies to design, experiment and express themselves, free of the material constraints that once limited their freedom to do so.
Others retreat inward and experiment with more existential, subconscious and metaphysical matters.
A view of arcCity’s MetaDistrict
Inspired by two Black Mirror episodes, Nosedive (set in a world where social media status is one’s only way to progress in life) and Be Right Back (where a person is cloned from their digital avatar), I took on the task of asking myself:
To analyze one’s response and subconscious self on social media platforms, the following needed to be studied:
Length of exposure to images/videos
Emotional reaction to images/videos
Scrolling loop from account to account - image to image
Similarities between images (color / contrast…)
Increases or decreases in volume
Speed of story swapping
My project focuses on the first and third points.
Step 1: Object recognition using Yolo + p5.js
Object detection was first tried on RunwayML. To be able to edit the code and save images of the detected objects, I went with YOLO on p5.js. From the link above, be sure to download the file and run it in Brackets if you are not familiar with how local servers work (Brackets has a built-in local server).
Step 2: Adding speed of scrolling on p5.js
This was done by comparing rows of pixels between consecutive frames: the faster the scroll, the more the two rows differ and the higher the score (speed). This does not account for the speed of swiping through stories, which move differently than the homepage feed. Note that I could not use the actual mobile scrolling as input in real time; I had to record a video and then analyze it as a standalone piece.
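The row-comparison idea can be sketched as follows. This is a minimal Python version rather than the original p5.js code, and it assumes a hypothetical frame format: each frame is a list of pixel rows, each row a list of grayscale values.

```python
def scroll_speed(prev_frame, curr_frame):
    """Score how fast the feed is scrolling by comparing the same rows
    of pixels in two consecutive frames: the larger the total
    difference, the faster the scroll (a still frame scores 0)."""
    diff = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            diff += abs(p - c)
    # Normalize by the number of pixels compared.
    pixels = sum(min(len(a), len(b)) for a, b in zip(prev_frame, curr_frame))
    return diff / pixels if pixels else 0.0

still = [[10, 10, 10], [20, 20, 20], [30, 30, 30]]
scrolled = [[20, 20, 20], [30, 30, 30], [40, 40, 40]]  # content shifted up one row
print(scroll_speed(still, still))     # 0.0
print(scroll_speed(still, scrolled))  # 10.0
```

With real video, each frame would come from the recorded screen capture, and the per-frame scores form the speed signal over time.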
Step 3: Smile detection with Python and OpenCV
Smiles can be detected easily in Python with OpenCV. If you don’t have either installed, please follow this tutorial. Capturing the user’s smile assigned a score to specific images, and those images were duplicated into a separate folder.
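The score-and-duplicate step might look like the sketch below. The smile scores here are hypothetical stand-ins for the OpenCV detector's output, and `duplicate_smiled_images` and its threshold are names I've invented for illustration.

```python
import shutil
import tempfile
from pathlib import Path

def duplicate_smiled_images(scores, src_dir, smile_dir, threshold=0.5):
    """Copy every image whose smile score passes the threshold into a
    separate 'smile' folder, as in Step 3."""
    smile_dir = Path(smile_dir)
    smile_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for name, score in scores.items():
        if score >= threshold:
            shutil.copy(Path(src_dir) / name, smile_dir / name)
            copied.append(name)
    return copied

# Demo with placeholder files and made-up scores.
base = Path(tempfile.mkdtemp())
(base / "a.jpg").write_bytes(b"fake image data")
(base / "b.jpg").write_bytes(b"fake image data")
scores = {"a.jpg": 0.9, "b.jpg": 0.1}  # hypothetical smile-detector scores
copied = duplicate_smiled_images(scores, base, base / "smiled")
print(copied)  # ['a.jpg']
```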
Step 4: Saving images with scores
Finally, the images were saved; their sizes reflect how long the user lingered on each one.
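One plausible way to map dwell time to saved size is a simple linear scale with a cap; the base, scale and cap values below are assumptions, not the project's actual numbers.

```python
def image_size_for_duration(seconds, base=40, scale=12, max_size=400):
    """Map how long the user lingered on an image to the pixel size it
    is saved at: longer exposure -> larger image, capped at max_size."""
    return min(max_size, int(base + scale * seconds))

print(image_size_for_duration(0))   # 40  (barely seen)
print(image_size_for_duration(5))   # 100
print(image_size_for_duration(60))  # 400 (capped)
```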
Step 5: Processing script
Here I give the user a set of options. They can view a variety of combinations of their images that abstractly represent their subconscious. They first enter immersive mode to view a vague representation of it, and see how their exposure on Instagram during that specific day created this animation. To understand more, the user can click into more precise representations (Reddish, Greenish, Bluish, least attention-grabbing, most attention-grabbing, and the ones that made them smile).
Check out the results!
“X” - Immersive mode
“0” - The interface and controls
“R” - The Reddish images
“G” - The Greenish images
“B” - The Bluish images
“S” - The images you stayed on the least
“L” - The images that grabbed your attention the most
“3” - “4” - The images that made you smile
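The original Processing sketch is not shown here, but the R/G/B views could be driven by something as simple as bucketing each image by its dominant average channel. This is a minimal Python sketch of that idea, assuming each image has been reduced to a list of (r, g, b) pixel tuples.

```python
def color_group(pixels):
    """Classify an image as 'R', 'G' or 'B' by its dominant average
    channel, mirroring the Reddish/Greenish/Bluish views."""
    n = len(pixels)
    averages = [sum(p[channel] for p in pixels) / n for channel in range(3)]
    return "RGB"[averages.index(max(averages))]

warm = [(200, 40, 30), (180, 60, 50)]   # mostly red pixels
leafy = [(30, 190, 60), (40, 210, 80)]  # mostly green pixels
print(color_group(warm))   # 'R'
print(color_group(leafy))  # 'G'
```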
“The closer you come to knowing that you alone create the world of your experience, the more vital it becomes for you to discover just who is doing the creating.”
― Eric Micha'el Leventhal
Applying Instanalysis patterns to a living room through color-changing material.
While this short research project is not quite practical yet, the goal is to imagine how scrolling or social media activity, together with a clearer understanding of one’s subconscious, could be put to a different purpose. Would users become more aware? Would this change their behavior, knowing that they can create different arrangements of images every day?
Can scrolling become a creative activity?