Part one
Scenes from the Himalayas, the Baltic Sea and different locations in the Stockholm area shot through Google Earth Pro. Music generated using Amper.

Part two
Screen recordings made using the translation and object-identification apps iTranslate, Iryss, Google Translate and iDetection.

Part three
All images and recordings are from the visual database Cocodataset.org, primarily used to train visual machine learning models.

Part four
Scenes from the Pacific Ocean, Santa Cruz, Detroit, Silicon Valley, and the Warner Estate in Beverly Hills, shot through Google Earth Pro. Music generated using Amper.

Part five
All images were generated using StyleGAN models in Runway ML. The datasets used were composed by me. The image interpretations (titles) were generated using Visual Chatbot and the object-identification feature in iDetection.


The script for this story was partly inspired by the manifesto Kino-Eye, written by film director Dziga Vertov in 1923.

The typefaces in use are Neue Montreal and Druk. This webpage was designed and coded by me, Frida Haggstrom.

Full thesis coming soon.