Mind’s Eye Process…

BRIEF: 3. A self-initiated project that critically and imaginatively explores an aspect of media design / network media.


Short Proposal: I intend to create an app-based locative audio experience that will attempt to change perspectives and challenge expectations.

Long Proposal: I intend to create an app-based locative audio experience that will attempt to change perspectives and challenge expectations. The app will be similar to an audio guide of the city of Bristol, but not told through the voice of a narrator in an educated position of influence. It will instead be told by members of the public, the people who interact with locations in Bristol the most, often on a daily basis. In my opinion, they are in a better position to tell the history of a place through their own personal experiences and memories than one person with a fact sheet. I also think this makes for a more interesting historical document than an impersonal, general history of a location.

The locations used in the app will be in Bristol, specifically selected along a predetermined route for the user of the app to walk. Users will hear from people who have stories to tell from the past and present. I am interested in the dialogue between a varied cross-section of society, expressing their personal viewpoints, memories, histories and stories of the same location, whether that’s the cleaner of a building, its managing director and owner, or the homeless person outside who has never been inside it. All of them are in dialogue with one another inside the app’s audio, and consequently inside the ears of the user, painting a picture inside their mind, hence the name of the locative app: ‘Mind’s Eye’.

To collect the audio for the app’s soundscapes, I am considering building a method of digital submission of MP3s, where participants can record their responses to a few questions about a location featured along the route, one they may have a personal experience of, a history with, or a story to tell about. I would then share this open call for responses on social media to encourage people with stories to respond. However, relying on social media alone risks excluding underrepresented groups in society, for example those who do not have access to social media and the internet, such as the homeless. To capture their voices, I will personally go out and manually collect audio interviews and stories of my own. Combined with the internet submissions, this should provide more than enough content to construct a cohesive audio narrative for the app to be immersive and function successfully.

I will make the app using onsen.io as the app framework, along with Google’s Firebase backend service.

MOOD BOARD:

RESEARCH – Sources of Inspiration:
Wings of Desire – Wim Wenders: The 1987 German film Wings of Desire by Wim Wenders was one of the main sources of inspiration for this project. The film revolves around two angels, Damiel and Cassiel, who glide through the streets of Berlin, observing the bustling population and providing invisible rays of hope to the distressed, but never interacting with them. There is a key scene in the film (see below) where the camera pans along a Berlin subway car and you can hear the thoughts of all the passengers on board. I like that you hear every passenger’s thoughts and opinions with no hierarchy or preference; you hear what they are thinking, no matter how seemingly meaningless or trivial it may be. It made me think of producing an audio guide told by the people who experience a space first hand, as they are arguably in a better position to tell its history than a historian or ‘expert’. I also thought this more candid form of storytelling would be more interesting for the listener, as they would feel like they were being let into local secrets and inside knowledge.

The Memory Dealer – Rik Lander: Another source of inspiration for this project is The Memory Dealer by Rik Lander, an interactive drama that uses a smartphone app, live performers and installations in several places around the centre of Bristol to immerse the audience in the story. The Memory Dealer is a form of interactive theatre in which you become an integral part of the story. I had the chance to experience this project first hand, and I think that is where the real immersion lies: feeling like a fundamental piece of the storyline, unlike most plays and novels, where you are just a third party watching the story unfold. In The Memory Dealer you have to walk to certain places, communicate with characters in the story and carry out tasks, which makes the whole drama much more immersive and captivating, because you have a vested interest in the outcome and feel a sense of responsibility.

It Must Have Been Dark by Then – Circumstance: Last year I was fortunate enough to experience this project first hand and meet its creator, Duncan Speakman. The project uses an audio soundscape embedded in an app alongside a book, which you read in tandem with roaming the city. It is extremely immersive and very well made; the soundscapes and narration are beautiful. It was a big inspiration for this project.

They Live – John Carpenter: I’ve chosen the 1988 cult film They Live by John Carpenter as a source of inspiration because the main character, Nada, retrieves a box of sunglasses that reveal a hidden reality: the media and advertising hide omnipresent subliminal stimuli to obey, consume, reproduce and conform, thus explaining humankind’s passive attitude towards progress and obsession with the banal, while many of the elite are actually grotesque aliens who look like animated corpses. I think the sunglasses are a great metaphor for the changing of one’s perspective, which is what I am trying to achieve with my app-based immersive audio experience.

The Cartographer’s Confession – James Attlee:

Post-Truth Guide – Peter Bennett:

Transgressing Boundaries – Radley Cook, Levi Giles & Will Grant: Transgressing Boundaries is an audio documentary investigating the perspectives of skateboarders and members of the public on how spaces used for street skateboarding in Bristol impact the community socially, politically and culturally, as well as examining where the boundaries lie between public spaces and skateparks.

ACADEMIC RESEARCH – Sources of Inspiration:
Guy Debord – The Society of the Spectacle: 
The Situationist International was a relatively small Paris-based group of influential avant-garde artists and intellectuals, best known for their radical political theory and their influence on the May 1968 student rebellion and worker revolts in France. The Situationists increasingly applied their critique not only to culture but to all aspects of capitalist society. Guy Debord emerged as the group’s most important figure; he applied Marx’s ideas to mass communication, showing how capitalism has penetrated not just what we produce and consume, but how we communicate. The Situationists characterised the whole of modern capitalist society as an “organisation of spectacles”. This is clearly a cynical statement: I believe Debord is suggesting that the ‘spectacle’, as manifested in mass entertainment, news and advertising, alienates us from ourselves and our desires in order to facilitate the accumulation of capital, and is used as a smokescreen that can be manipulated and leveraged to deceive us. I believe we can harness these same elements and use them to captivate and immerse users in digital experiences.

Marshall McLuhan – Understanding Media: Marshall McLuhan is famously quoted from his seminal book Understanding Media: ‘The medium is the message’. He is suggesting that a medium affects the society in which it plays a role mainly through the characteristics of the medium itself rather than its content. This couldn’t be more relevant to my project, because I don’t believe the project could exist in any other medium and still have the same effect; it’s not the history of the city as told by the people that is important, it’s the way that it’s delivered. You could transcribe everything that people said and put it in a book, but I don’t believe it would have the same effect. So despite McLuhan writing in 1964, long before any of the modern technologies I plan on implementing, his sentiments are still very applicable today.

Charles Baudelaire – The Painter of Modern Life: One could argue that no one’s writing is more relevant to this project than Baudelaire’s account of flânerie in his essay ‘The Painter of Modern Life’. Flânerie is the art of strolling and looking, commonly associated with the shopping arcades of nineteenth-century Paris. Baudelaire describes the anonymous man on the streets of Paris, drifting through an urban crowd, strolling as a detached observer, part of the crowd yet also removed from it. This is exactly how I would describe being a participant in an immersive audio experience in a built-up metropolitan area: you feel part of the crowd because you are present in the space, but removed from it by what you are hearing, as if you are in a haze or a bubble. I think this text, and writing around the idea of flânerie by Walter Benjamin, for example, is extremely relevant to my project. Whilst researching flânerie, I discovered the term ‘cyberflâneur’, a version of the flâneur for the digital world in which we live today: someone surfing the (Geocities) arcades of the world wide web with no particular place to go. The flâneur, for Baudelaire, was a man who could “reap aesthetic meaning from the spectacle of the teeming crowds, the visible public of the metropolitan environment of the city of Paris”; for me, that is what audio can do for a user, it can inject context and meaning into the environment around us. I think the ideas around the flâneur are more relevant today than ever before; we are all capable of being immersed in a crowd whilst also being detached and removed from it, observing the environment around us.

LOGO/BRANDING:

PRESENTATION SLIDES:

WIREFRAMES & UX PROTOTYPING: Below you can see a screenshot of my artboard in Sketch, with a series of interconnected wireframes I used to develop the user experience for the Mind’s Eye app. This was a very helpful process: I asked friends, family and lecturers for feedback based on the interactive wireframes, which proved very beneficial in developing the app’s user experience. You can see the live version of the wireframes and test them out for yourself here.

GRAPHIC VISUALISATIONS: Below you can see a series of graphic visualisation mock-ups of the interface design for each app screen, as you progress through the audio experience. I made these primarily to assist me with the CSS styling inside the Onsen framework, and to give myself a consistent aesthetic to work towards. The design consists of only three colours: white, black and orange (#F7A70B). I wanted to keep it simple to match the experience design of the app, which is also simple, and I liked the challenge of restricting myself to three highly contrasting colours. I used the basic wireframes I made in Sketch as a starting point to add detail and colour.

PROJECT CHANGES: It was around this point that the project idea changed significantly. I realised it would be too complex and ambitious to attempt to build the recording functionality into the app as well as positioning the sounds on the map with pins for users to find. The initial idea was to enable users to record memories and stories about a location at the location itself. But the more I thought about it, the more going out myself to collect and author the content seemed the better approach, as it enabled me to gather a much wider variety of content from different people. If uploads were only possible from within the app, that would create a bias towards those who own smartphones, excluding people who cannot afford one, the homeless for example. The submissions would consequently be less diverse, and the point of the project is to hear a cross-section of society’s history of urban space. This change also meant I would not have to create separate user logins, profiles and databases to allow in-app uploads, which would have been totally unachievable in the given timeframe; I would also have had to moderate submissions, accept them and push them back to the app.

CODING PROCESS: The coding process was lengthy; it involved making a series of separate prototypes and then combining them to create the final project. As you can see from the image below, I built each integral element of the project as an individual prototype. For example: placing multiple pins on a map, creating custom pins, panning the map based on movement, drawing a geofence on a map, triggering an alert when you enter a geofence, creating multiple geofences, playing audio, randomly playing audio, changing the play/pause button image, randomly playing an audio file when you walk into a geofence, styling the look of the Google map, plotting a route on the map, creating the Onsen navigation, styling the default Onsen elements, creating and styling a slide-up dialogue box, and button styling. Each individual prototype had to be fully working and tested before they were combined, and combining them was often the most difficult part of the process, as incorporating one prototype would sometimes break something else, like the page navigation system.
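To illustrate the geofence-trigger prototype: stripped of the Google Maps and device-geolocation plumbing, the core of it is just a distance check against each hotspot. The sketch below is illustrative rather than the app’s actual code, and the hotspot name, coordinates and radius are made-up example values:

```javascript
// Haversine distance in metres between two { lat, lng } points.
function distanceMetres(a, b) {
  const R = 6371000; // mean Earth radius in metres
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Returns the first geofence the user is currently inside, or null.
// In the real app this check would run on each geolocation update.
function activeGeofence(position, geofences) {
  return geofences.find((g) => distanceMetres(position, g.centre) <= g.radius) || null;
}

// Illustrative hotspot outside the Arnolfini (coordinates approximate).
const geofences = [
  { id: 'arnolfini', centre: { lat: 51.4493, lng: -2.5984 }, radius: 50 },
];
```

A geolocation callback would then compare `activeGeofence` results between updates: when it changes from `null` to a hotspot, that is the moment to start a soundscape.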

LOGO ANIMATION: Below you can see the process involved in making two Mind’s Eye logo animations using Adobe After Effects. I exported them from After Effects as .mp4 files, then opened the video files in Photoshop and exported them as endlessly looping GIFs for use in the app. The lengthiest part of the process was producing the artwork for each frame of the animation in Illustrator; once that was complete, the work inside After Effects was quite quick.

 

APP TESTING & DEVELOPMENT: Below you can see evidence of user testing at different stages of the project’s development. This kind of testing, feedback and iteration was integral to the success of the project. I found that people instantly understood the concept of the app and what to do, and the overall feedback was positive. However, some users closed the app while listening to a soundscape, cutting the audio off; supporting background playback could be something to explore for version 2.0 of the app.

AUDIO: Collecting good audio was integral to the success of this project. Before I collected the final audio clips for the app with members of the general public around the Harbourside area, I wanted to test the idea and see if it would be as effective as I thought it might be. To do this I made a concept prototype (which you can listen to below), consisting of me and some of my friends describing stories or memories from the Harbourside area, specifically outside the Arnolfini art gallery; the stories were approximately one minute long. I edited the clips together using Adobe Audition, applying a crossfade between them to make it sound as if one person was walking away into the distance while the next walked towards the listener. Once I had this edited version, I put it on my phone, went down to the Harbourside and played the audio in the location. I think the concept worked really successfully: it definitely made me look at the space around me differently and really painted a picture in my mind as each story was described, and being in the space they were describing really helped with that. One thing, however, didn’t work as successfully. Listening to the recordings in the space they described, it was clear they had not been recorded there; they were recorded in a quiet room, not standing outside the Arnolfini. I therefore decided it was integral from that point onwards that all the recordings must be captured on location, so that the listener hears the natural background hustle and bustle they will hear in the space anyway, and isn’t taken out of the place they are standing in. The idea of Mind’s Eye is to add another layer or perspective to the location the user is standing in, not to take them out of or away from that location with the audio.
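The crossfades themselves were made by hand in Audition, but the underlying idea is an equal-power fade: the outgoing and incoming clips are weighted so that the combined loudness stays roughly constant as one voice recedes and the next approaches. A minimal sketch of the gain curve (a standard technique, not code from the app):

```javascript
// Equal-power crossfade gains for a fade position t in [0, 1].
// At t = 0 only clip A is heard, at t = 1 only clip B, and the
// combined power (a² + b²) stays constant at 1 throughout,
// which avoids the audible dip of a simple linear fade.
function crossfadeGains(t) {
  return {
    a: Math.cos((t * Math.PI) / 2),
    b: Math.sin((t * Math.PI) / 2),
  };
}
```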

Below: You can listen to a 20-minute audio file I created by combining all the audio I collected outside the Arnolfini gallery and around the Harbourside area. I approached everyone who was willing to talk and asked them: ‘What does this area mean to you?’, ‘Do you have any stories or memories about this area?’ and, if they had known the area for a long time, ‘Could you describe how the area has changed over time?’. I tried to involve a very wide spectrum of participants in the project: buskers, homeless people, ice cream salesmen, old, young, locals and tourists. I like the contrasting opinions, knowledge, insight and depth in the stories that participants told. To me, it doesn’t matter how interesting the stories seem at face value; what matters is their relevance, that all the stories are tied together by the single location in which they were recorded and which people were describing. I was really pleased with the responses I captured. People were much more candid, honest and open to talking about big, important subjects than I thought they would be; I almost felt they wanted to be given a platform to share them. I think when you listen to the audio below, you get an accurate picture of the place in the same way you would through an audio guide, which is exactly what I intended. The only thing I would like to improve would be to go back to the location and collect more audio on a less sunny day than when I made the recordings, to see if the general mood amongst people, and consequently their responses, would be different.

POTENTIAL ADVERTISING: 

REFLECTION & EVALUATION: I set out to create Mind’s Eye, an app-based locative audio experience that attempts to change perspectives and challenge expectations about urban spaces. The app is comparable to an audio guide of a city. However, unlike most audio guides, Mind’s Eye portrays a genuine and comprehensive recent history of a city, told by the people experiencing it first hand, whether they are unfamiliar with the city or have an established history with the area. I believe that everyone’s perspective and opinion is important and should be listened to. I also think these individuals are in a better position to tell the city’s history than an academic, because they have personal experiences, emotions and memories attached to the place.

I collected stories from people across the city for users to stumble upon. Through experiencing Mind’s Eye, users might discover something new about their city and hear from people with different backgrounds and conflicting perspectives. Users will listen to vastly different stories, varied knowledge and expertise about shared urban spaces, all of it unfiltered and unstructured. I believe this makes learning about the modern history of a place fundamentally more interesting. I think Mind’s Eye is a true historical document of modern-day society, as it gives users a real, untouched stream of instant raw emotion and perspective on the environment around them.

The Mind’s Eye app went through many iterations, which mainly involved simplifying the app and prioritising the user experience over the user interface. I think initially the app had too many pages. I learnt that for audio experience apps in general, you do not want the interface design to confuse, overcomplicate or distract the user from the audio itself; the audio should be the main focus and the design should facilitate this. As a result, a lot of the app’s design is in the user experience and the code: the construction of the audio triggered by entering geofences using the user’s geolocation involves a lot of work behind the scenes. I wanted to encourage the user to look up at the environment around them, not down at their smartphone, which is difficult and unintuitive when designing an interface; you generally want to do the opposite. I think this method of enabling users to appreciate the context around them, rather than encouraging them to interact directly with the app interface, allows them to focus more of their attention on the audio and its relationship with the scene around them. In this day and age, the attention economy is big business; most apps, especially social media, are competing for our undivided attention. I think there is a gap in the market for an app with the opposite approach: one that still adds another layer to our lives through technology, but doesn’t do so by encouraging us to look down at our screens. I believe this is why podcasts have become so popular in the last few years. They are a relatively old concept (Apple added podcasts to the iTunes Store in 2005), but we have seen this resurgence in radio and long-form audio because people want to consume content whilst appreciating the world around them, not being distracted by their phone screens.
The great thing about audio is that it allows you to continue living your life and appreciating your surroundings whilst also learning, consuming, changing your perspectives and challenging your assumptions.

In the final version of my app, the user simply walks towards a marker on the map; the markers are usually arranged along a predetermined route. As they approach the marker, an audio soundscape automatically plays when they enter the area around it. The soundscape they receive is randomly selected, and is regularly updated with new audio as stories are collected, so each user’s experience is likely to differ from another’s; if you run the app again, the soundscape you hear will probably be different. In an earlier wireframe iteration, I thought the user should travel to the pin using the map and then tap the pin once they arrive at the location to trigger the audio. For example, I could have added the information slide-up panel as a separate page, but I felt this was unnecessary. A lot of the experience design came out of the wireframing process: using interactive wireframes from within Sketch to test the user experience with friends, family and myself at the location with the test audio.
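The random-selection behaviour described above can be sketched as a small function. The clip objects and the no-immediate-repeat rule here are illustrative assumptions rather than the app’s exact logic:

```javascript
// Pick a random clip for a hotspot, avoiding the clip heard last time
// so that repeat visits are likely to feel different. If only one clip
// exists, fall back to the full list rather than returning nothing.
function pickClip(clips, lastClipId = null) {
  const candidates = clips.filter((c) => c.id !== lastClipId);
  const pool = candidates.length > 0 ? candidates : clips;
  return pool[Math.floor(Math.random() * pool.length)];
}
```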

I believe the core idea of the app is successful. As I said previously, the success of the project was heavily dependent on the quality and variety of interviews I collected from members of the public in urban spaces, and the stories and memories they had to share. I think the audio I collected for the app is a great start; I plan to collect more to increase the app’s repository of audio interviews and their variety, building upon what is already there to make the project more interesting for users and hopefully encourage them to return to the app. The app is still very much in its infancy, as this is only the first version, but I believe it has scope to grow, evolve and expand. The cornerstones are there: from building this first version, it is clear that the core functionality of the idea works as an immersive and captivating audio experience. Ideally, I would like the app to work without being open on the user’s smartphone. I think it would be most effective if walking into a geofence triggered a push notification on the user’s home screen; the user could then unlock their phone, which would open the app and trigger the audio to play, instead of actively seeking out the hotspots indicated by pins on the map. I can imagine a time when there are lots of hotspots across all cities, which would increase the likelihood of entering one. Most people walk around cities today with AirPods or headphones in, so they could easily switch from listening to music or a podcast to experiencing temporary immersion through Mind’s Eye, on their daily commute to work for example.

On the other hand, there are a few improvements and a few small bug fixes to be made to the app. If I were to make a second version, instead of writing the code so that it randomly selects just one clip at each hotspot, I would write it so that it randomly selects a few audio clips and plays through them. The reason is that the clips vary massively in length: some are 15 seconds long and some are 5 minutes long. I discovered through testing that it is much better if the audio gets you from one hotspot to the next, which works very well, but occasionally you get a short clip and are left to walk the rest of the way to the next marker in silence, which is a bit disconcerting. Another example is the pinpoint location button, which doesn’t function properly. I would also like to make the pins clickable, with a small popup dialogue containing some information about each location and how many audio clips were captured in the area. Finally, I would like to develop the website and the submission element of the app, to engage further with the community and make it easier for the public to join the conversation. Mind’s Eye is a community-focused app, and my initial objective was to involve as many people as possible and give them a platform to tell their stories and personal histories, collaboratively making their own audio city guide of the urban place in which they live. Therefore I want to make it as easy as possible for participants to do this. I believe this starts with spreading the word that the app exists: producing a clean, easy-to-use website with a simple submission system, accompanied by a social media campaign and a physical advertising campaign, pasting up posters across cities for example, would be a good start in encouraging app adoption and getting the word out.
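The version-2 idea of queueing several clips could be sketched as picking shuffled clips until a target walking time is covered. This is a hypothetical sketch with made-up durations, not code from the current app:

```javascript
// Unbiased Fisher–Yates shuffle (returns a copy, input untouched).
function shuffle(clips) {
  const a = [...clips];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Build a playlist of randomly ordered clips whose combined length
// covers roughly the estimated walking time (in seconds) to the
// next hotspot, so the listener isn't left in silence.
function buildPlaylist(clips, targetSeconds) {
  const playlist = [];
  let total = 0;
  for (const clip of shuffle(clips)) {
    if (total >= targetSeconds) break;
    playlist.push(clip);
    total += clip.duration;
  }
  return playlist;
}
```

The walking-time estimate itself could come from the route distance divided by an average walking speed; that part is left out here.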

DEGREE SHOW: For the degree show exhibition, I will produce a short cinematic film demonstrating how the Mind’s Eye app works in a clear, visual way. I think this is important because I imagine most people at the degree show won’t go outside to test the app straight away. I want to give people an idea of how the app works in its intended environment, and I hope the degree show encourages them to download the app, which will be available on both the Apple App Store and the Google Play Store at the show. However, I feel users get the most out of the app when using it in its intended environment, so I want to give viewers an idea of its intended use and encourage them to try it. To overcome this hurdle, as well as producing the promotional and instructional film, I will create an iteration of the app especially for the degree show, stripped of the locative elements, that simply showcases the audio I collected from members of the public. The idea is that visitors can put on headphones and press a button on an iPad to hear one of the 20 different audio recordings I collected from Bristol visitors and locals, chosen at random, to give them a taste of the kind of audio used in the app. Furthermore, for viewers who do not want to listen to the audio on the iPad, there will be display boards behind the iPad stand with a selection of quotes taken directly from the recordings used in the app. I believe all of these elements combined should encourage visitors to download the app and participate in the whole Mind’s Eye audio experience. In addition, to make the app download more efficient, I will install an NFC tag on the plinth supporting the iPad stand, with a quick link to download the app straight to visitors’ smartphones.

Click here to download the ‘Mind’s Eye’ app APK!

 


Speaks a Thousand Words Supporting Work…

BRIEF: 2. Making Matter Matter: There is a notion that software is immaterial like a “cloud”, but the reality is that it occupies many physical forms (such as micro notches in a CD or large underground server farms). You are asked to develop a hardware project that creatively explores the materiality of software.


VIDEO: Below you can see a short video montage I produced in response to an activity I was set, exploring what the interrelationship between the physical and the digital means to me.

RESEARCH – Sources of Inspiration:
Vox – How Snapchat’s filters work: This video by Vox explains Snapchat’s engineering and its use of facial recognition technology to augment selfies by manipulating and overlaying real-time images and animations on users’ faces. It also documents Snapchat’s acquisition of a Ukrainian startup in 2015 and explores ‘computer vision’, the technology involved in analysing pixel data to identify objects in images, which, thanks to recent developments in smartphones, Snapchat is able to do in real time.

Arran Gregory: One of my primary inspirations for this project is London-based artist and sculptor Arran Gregory. He uses materials such as mirror, wood, glass and concrete to express a minimalist approach reflective of our contemporary environment, as well as to explore and question the idea of the primitive self in the digital age and the balance between society, nature and technology. I am particularly interested in his ‘Dopamine Dance’ installation, which features a large black mirrored cat’s head rotating above an army of 3D-printed GeoNeko (Gregory’s adaptation of the familiar ManekiNeko or ‘beckoning cat’). The GeoNeko cats are all connected to Instagram, and by ‘following’, ‘liking’ or commenting on the @geo.neko account the viewer can trigger them to wave in real life. The GeoNeko cats were created in an attempt to ‘personify’ the internet as a character or mascot. The geometric shape of his sculptures reminds me of three-dimensional realisations of the angular facial-mesh overlays used by facial recognition software. You can see more of Arran Gregory’s work here. Dopamine Dance, 2019. Insta-active installation, 3D print, motor, Raspberry Pi, servo motor.

Mirror Amur Leopard, 2015. 1/20. Fibreglass, mirror, steel. 73 x 34 x 227cm.

Mirror Wolf, 2012. Fibreglass, mirror. 185 x 100 x 85cm.

Antony Gormley: Another source of inspiration is the sculptural work of Antony Gormley, whose work primarily focuses on the human body. I think the two images below, particularly his 2010 piece ‘Exposure’, which depicts a human figure squatting looking out to sea, almost look like physical depictions of facial recognition software extended across the whole body. I don’t know if this was Gormley’s intention, but I definitely see parallels between the two styles. You can see more of Antony Gormley’s work here. Quantum Cloud XV, 2000.

Exposure, 2010. Lelystad, Holland.

Euphrates – Ballet Rotoscope: Research group Euphrates experimented with tracking a ballet dancer’s movements in this Ballet Rotoscope video. Rotoscoping is an old technique used by animators to capture movement. Pictures or video are taken and lines are traced for use in different contexts. I have chosen this video as a source of inspiration for this project because the rotoscoping overlay animation used in the video reminds me of the facial mesh overlays used in facial recognition software.

Matthew Plummer-Fernandez: Another artistic inspiration is Matthew Plummer-Fernandez, a British/Colombian artist who creates sculptures, software, online interventions and installations, often in connection with one another, producing and reflecting on contemporary social and computational entanglements and configurations. Plummer-Fernandez often uses 3D printing to fabricate sculptures. One work that particularly interests me is his 2013 sculpture ‘sekuMoi Mecy 3; Smooth() Operator’. The piece started life as a 3D scan of a Mickey Mouse toy; the scan was then processed using a mesh-smoothing technique pushed to the extreme, until it no longer violated Disney’s intellectual property rights. You can see more of Matthew Plummer-Fernandez’s work here. sekuMoi Mecy 3; Smooth() Operator, 2013. 3D printed (plaster, ink, adhesive), 14 x 9 x 20cm

Venus of Google, 2013. 17 x 9 x 30 cm 3D Printed (plaster, ink, adhesive)

Disarming Corruptor, 2013. Application Software

Sophie Bullock: Another artistic inspiration is Sophie Bullock, a Manchester-based visual artist whose work utilises AI to generate physical experiences in public spaces. Bullock is interested in developing an AI which can physically curate both video and visual content. I like the image below that shows the process involved in training the AI software to recognise different body positions to trigger live video, which you can see in action in the AI Charades video below. You can see more of Sophie Bullock’s work here. in_collusion, 2017. AI Software.

Mo Cornelisse: Another artistic inspiration is Mo Cornelisse, a contemporary artist based in Holland who often works in ceramics to produce simple sculptures with minimal colour palettes. I like her low-poly-style ceramic renderings of toys, which you can see below. Low-poly objects look like they use a varied palette of colours, but this is in fact an optical illusion created by the way the light hits the object. You can see more of Mo Cornelisse’s work here. Lost Toys Boy & Girl, 2017. Ceramic and Gold Leaf Sculptures.

Kubus Wave Length, 2016. Ceramic Sculpture.

Zach Blas: Another artistic inspiration is artist, filmmaker and writer Zach Blas, a Lecturer in the Department of Visual Cultures at Goldsmiths, University of London. His practice spans technical investigation, theoretical research, conceptualism, performance and science fiction. I specifically like his Face Cages project. Blas, much like myself, is interested in biometric diagrams. In this project he has fabricated biometric diagrams as three-dimensional metal objects, evoking a material resonance with handcuffs, prison bars, and torture devices used during the medieval period and slavery in the United States. The Face Cages embody the irreconcilability of the biometric diagram with the materiality of the human face itself, and the violence that occurs when the two are forced to coincide, highlighting the failure of biometric machines to recognise non-normative, minoritarian persons, which makes such people vulnerable to discrimination, violence and criminalisation. You can see more of Zach Blas’s work here. Face Cage 2, endurance performance with Elle Mehrmand, 2014.

Face Cage 3 & Face Cage 3 3D render, 2014. 

im here to learn so :)))))), 2017.

Joy Buolamwini: Computer scientist and MIT fellow Joy Buolamwini’s work explores algorithmic bias, which she describes as ‘The Coded Gaze’. She started the ‘Algorithmic Justice League’ (AJL), a website aimed at highlighting and increasing awareness of algorithmic bias, providing a space for users to report experiences of coded bias, and developing practices that test technology with diverse users, in addition to increasing accessibility and accountability during the design and development process. In 2018 she produced the ‘Gender Shades’ research project, which evaluated and compared the accuracy of three leading AI-powered gender classification products, in order to highlight the need for increased transparency in the performance of any AI products and services focused on human subjects. Bias in this context was defined as practical differences in gender classification error rates between groups. The results showed that all of the facial recognition companies tested performed worst on darker-skinned females, and error analysis revealed that 93.6% of the faces misgendered by Microsoft were those of darker-skinned subjects. The outcome of the project emphasised that automated systems are not inherently neutral: they reflect the priorities, preferences and prejudices of those who have the power to mould artificial intelligence. Buolamwini’s work is relevant to mine because she too is very sceptical of the AI behind facial recognition technology. She has had personal experiences of coded bias, when facial recognition software failed to recognise her face. In response, she is presenting the technology community with a legitimate route to solving the problem by spreading awareness through the AJL collective, in an attempt to ultimately reduce coded bias in our technologies.

ACADEMIC RESEARCH – Sources of Inspiration:
Oscar Schwartz – Don’t look now: why you should be worried about machines reading your emotions: Oscar Schwartz’s article explores the business of facial recognition software and how it is implemented by governments, surveillance systems and national security agencies to detect potential terrorists. One strand of facial recognition technology is emotion detection, which requires two techniques: computer vision, to precisely identify facial expressions, and machine learning algorithms, to analyse and interpret the emotional content of those facial features. This article is very relevant to the subject area I intend to explore. It gives many real-world examples of how the relationship between computer vision and human emotion is currently being explored, trained and implemented in certain areas of society, driven by companies like Rana el Kaliouby’s Affectiva, which is mentioned in the article. You can read the full article here.

Hito Steyerl – Too Much World: Is the Internet Dead?: Steyerl’s article ‘Too Much World: Is the Internet Dead?’ offers a rationalistic and cynical perspective on the post-internet age. She discusses how the world is imbued with the shrapnel of former images, and how we can no longer trust what we see or believe it to be true, because all media is photoshopped and post-produced. Steyerl uses playful and surreal language to make her points, for example: “Today’s workplace could turn out to be a rogue algorithm commandeering your hard drive, eyeballs, and dreams. And tomorrow you might have to disco all the way to insanity”. She is cynical of our reliance on the internet and modern interconnected technologies. This article relates to my project because Steyerl discusses the idea of a 3D dissemination of the digital, which is precisely what I intend to achieve in producing my project. Read the full journal article here.

Rob Kitchin & Martin Dodge – Code/Space: Software and Everyday Life: In their book ‘Code/Space: Software and Everyday Life’, Rob Kitchin and Martin Dodge discuss the increasing prevalence of coded environments in society, such as airports, train stations and shopping malls. They define a ‘code/space’ as a space where computation is such a crucial component that the environment or experience ceases to function in the absence of code. They reference academics and writers in the field who reiterate their argument that the significant flaw in these kinds of coded environment is twofold. Firstly, technology commonly fails, and without a rigid structure to fall back on this can be catastrophic in public spaces. Secondly, producing software for code/spaces relies on people writing the code, which is a complex and contingent process: code is produced by people who vary in abilities and worldviews and who are situated in particular social, political and economic contexts. As a result, code/spaces can be prone to bias against certain underrepresented minority groups in society. This book is relevant to the subject area of my project because, like myself, Kitchin and Dodge are very sceptical of the ability of software to integrate seamlessly into human life, despite continued efforts to train algorithms to do so.


Short Proposal: I intend to create an interactive artwork comprising a 3D printed sculpture and an augmented reality app, to explore whether computer vision can recognise human emotion.

Long Proposal: I intend to create an interactive and immersive artwork comprising a 3D printed sculpture: a physical manifestation of the tracking overlay produced by facial recognition software. This is the same technology used by ‘Snapchat’ and ‘Instagram’ to apply filters to users’ faces and to augment and digitally manipulate their facial features. The immersive experience will also feature an augmented reality app, which will allow users to interact with the 3D printed sculpture and reverse engineer the core functionality of facial recognition tracking software, by overlaying a three-dimensional moving render of a human face on top of the sculpture of the wireframe mesh overlay generated by the software.

Facial recognition software works by identifying areas of contrast between light and dark parts of the face, using the ‘Viola-Jones algorithm’ to locate facial features and produce an active shape model. An active shape model is a statistical model of a face shape, trained by people manually marking the borders of facial features in hundreds, sometimes thousands, of sample images. However, there has been contention around the lack of diversity of the people in the sample images, which largely goes unnoticed due to the lack of diversity in many tech companies, most of which are based on the west coast of America.
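To make the light/dark comparison concrete, here is a minimal sketch (my own illustration, not the code of any real detector) of the integral image trick at the heart of the Viola-Jones algorithm, which lets it sum the brightness of any rectangular region of the face in constant time:

```javascript
// Build an integral image: ii[y][x] holds the sum of all brightness
// values above and to the left of (x, y). Computed once per frame.
function integralImage(gray) {              // gray: 2D array of brightness values
  const h = gray.length, w = gray[0].length;
  const ii = Array.from({ length: h + 1 }, () => new Array(w + 1).fill(0));
  for (let y = 1; y <= h; y++)
    for (let x = 1; x <= w; x++)
      ii[y][x] = gray[y - 1][x - 1] + ii[y - 1][x] + ii[y][x - 1] - ii[y - 1][x - 1];
  return ii;
}

// Sum of pixels in the rectangle [x0, x1) x [y0, y1), in O(1):
// four array lookups, however large the rectangle is.
function rectSum(ii, x0, y0, x1, y1) {
  return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0];
}
```

A Haar-like feature is then just the difference between two such rectangle sums (e.g. the dark eye region versus the lighter cheek below it); the real detector evaluates thousands of these trained features in a cascade.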

To further investigate whether computer vision can recognise human emotion, the artwork will be accompanied by a series of screen prints exploring the relationship between computer vision and human emotion. I will examine the seven primary emotions (happy, sad, fear, anger, surprise, disgust and contempt) as seen by computer vision, by isolating the mesh overlays of facial recognition tracking software for each individual emotion and presenting them to the viewer to decipher. Fundamentally, I want to demonstrate that if humans, analysing the same points as the computer, cannot identify the emotion, how are we as a society supposed to train computers to recognise them?

NOTES & SKETCHES: Below you can see some initial notes and sketches I have made to document my thoughts and ideas around the concept of facial recognition.

FACIAL RECOGNITION SOFTWARE EXPERIMENTATION:

A screenshot I took of Snapchat’s facial recognition mask.

PRINTS: I am really interested in the disparity between computers and human emotions. I want to highlight this by isolating the facial recognition masks that are produced over the face by facial recognition software, for each of the seven primary emotions: happy, sad, fear, anger, surprise, disgust and contempt. I will leave the viewer with only the remaining web of dots and lines that maps out the human face in recognition software. I want to produce a series of prints, one for each of the seven emotions, and encourage the viewer to attempt to identify each emotion from the mask alone. The series will be a commentary on computers’ inability to identify and understand human emotions, despite their proficiency at completely tracking and augmenting our faces.

 

CODING: Below you can see evidence of me editing the code for the face tracking software that I found on GitHub. The reason for doing this was to allow me to isolate and export the face overlays the software was generating and applying over the images I uploaded. The software was using HTML5 canvases to apply the Candide shape overlays on top of images; I adapted the code to add a download button that targets the content inside the HTML canvas and allows me to download it as a JPEG. I could then trace those JPEGs in Illustrator to produce a series of screen prints. The vector files would also enable me to use the laser cutter to make prototypes and maquettes to develop the design for my final 3D printed sculpture.
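The download-button adaptation can be sketched roughly like this (a simplified reconstruction, not my exact GitHub edit; the function names and default filename are placeholders):

```javascript
// Serialise the current canvas bitmap as a base64 JPEG data URL.
function canvasToJpegUrl(canvas, quality = 0.92) {
  return canvas.toDataURL('image/jpeg', quality);
}

// Add a "Download JPEG" button next to the canvas. The canvas and
// document are passed in to keep the helper easy to test; in the
// browser you would call attachDownloadButton(myCanvas, document).
function attachDownloadButton(canvas, doc, filename = 'face-overlay.jpg') {
  const button = doc.createElement('button');
  button.textContent = 'Download JPEG';
  button.addEventListener('click', () => {
    const link = doc.createElement('a');
    link.href = canvasToJpegUrl(canvas);
    link.download = filename;   // triggers a file save instead of navigation
    link.click();
  });
  canvas.parentNode.appendChild(button);
  return button;
}
```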

SCREEN PRINTING:

3D MODELING:

 


LOGO/BRANDING:

LASER CUTTING:

3D PRINTING DESIGN & DEVELOPMENT:

UNITY AR INTEGRATION DEVELOPMENT:

TESTING NEOPIXEL INTEGRATION POTENTIAL:

PROTOTYPE TESTING & DEVELOPMENT:

FINAL PROTOTYPE:

Click here to download the ‘Speaks A Thousand Words App’ APK!

 

Computer Virus Interactive Data Visualisation…

BRIEF: You are required to produce a group-based project/artifact. It’s almost as simple as that. You have full scope to work in any way you prefer within the broad arena of Creative, Media, Design.

There are no restrictions on what you use to create this project or what it might encompass but you will need to show your process, workings and reflection in your research log / blog.

You should aim to make an ambitious project, but of appropriate scope for a group project, that builds on the knowledge, practice and experience that you have gained over the last two and half years. It may well extend ideas and research undertaken on other modules and projects.

You will need to show all your process and workings through the following two assignments, so the focus for this one is to make the project. You will all receive the same grade for this assignment unless it becomes evident that there is a mismatch of individual endeavour.

NOTES: Below you can see some initial notes that I made about our initial idea.

[RESEARCH] Sources of Inspiration:
Pixel Avenue:

Comparison of Terms and Conditions Lengths:

Ballet Rotoscope:

Plague Inc:

Do Not Track:

Google – What is malware?: We thought that this promotional video by Google about the dangers of malware epitomises our project idea. The video is not supposed to be funny, nor is it intentionally ironic. However, we thought it was both, because we see Google and its services as a form of malware themselves: there are many similarities between the features of malware and Google’s intrusive, data-centric behaviour.

Malware, Viruses and Log Visualisation- Iain Swanson:

Everest (Katie Rose Pipkin) – Mirror Lake: This 2015 project can be viewed here.

Disco Dish – Lynsey Calder & Sara Robertson:

TREATMENT DOCUMENT: Below you can see the treatment document that we put together to pitch our project idea to the rest of the class. For the design of the treatment document, we adopted the NHS’s branding guidelines for their own in-house documents; our treatment is specifically based on a 2016/17 annual report document that I found online. We used their font (Frutiger) and the exact blue used in their logo (#0069A6). The aim was, from the off, to communicate the theme of our project visually, without having to explain our intention of producing a project inspired by, and using the metaphor of, viruses. It made sense to us to make a treatment document with a medical feel to it. I think this clinical aesthetic is something we will continue when we start designing the graphical interface of the final project, because the feedback we received about the NHS theme of our treatment document was positive. It did, however, make people feel slightly uneasy and concerned, because people are used to reading about symptoms and treatments for injuries, infections and illnesses in the style of document that we adopted. I personally think this is quite apt, because we are intentionally trying to worry people: to get them to consider and question the security of their personal data when using certain technology platforms, and then to make the comparison between those platforms and computer viruses. We feel there is an irony in the sheer panic when someone’s machine gets infected with malware; they are usually very quick to remove it, yet they are happy to continue using platforms like Google, Facebook and Microsoft, which we believe are doing many of the same things as the malicious malware they are so quick to remove.

FURTHER RESEARCH: Onavo Protect was used by Facebook to monetise usage habits within an ostensibly privacy-focused environment; Facebook did not disclose that it owned the spyware.

The 2010 ‘WebcamGate’ case involved a school district taking 66,000 images of students, including in their bedrooms, using spyware installed on their school-provided laptops.

Above: The table shows our comparison of Facebook, Google and Microsoft to spyware, highlighting the similarities and crossover between a computer virus and the three platforms.

Below is an updated version of the comparison table above. We updated the table to make it more readable and understandable, for both us and others. As well as changing the basic layout and construction of the table, we also researched specific viruses to compare with our chosen technology companies. This is more specific than the method we adopted in the previous iteration, where we compared the technology companies against a general list of spyware features rather than the features of particular computer viruses.
MOOD BOARD:

STORYBOARDING: We thought a lot about the storyboarding process and did a large amount of sketching, because the setup of the interactive data visualisation experience was really important to the success of the project. Another integral element was that the user understood the virus metaphor we had chosen; without the user fully understanding the metaphor, the project would not be very effective.

WIREFRAMES: Below are a series of rendered wireframes I created using Sketch, based on the initial sketches above.

INTERACTIVE WIREFRAME PROTOTYPE: here.

USER EXPERIENCE TESTING: Below is an image of our user testing process. We tested the interactive wireframe prototype on numerous participants. We didn’t tell them anything about the project before they interacted with our wireframes; we then interviewed them to gain an insight into whether or not they understood the project, what they liked and disliked about the user experience, and, importantly, whether they understood the metaphor/argument we were trying to make. The user testing and feedback we received was invaluable. It was really beneficial to see how people who knew nothing about the project interacted with it, and as a result we are definitely making changes to quite a few aspects of the project’s interaction and user experience design.

VIRUS QUARANTINE SOFTWARE RESEARCH: Another avenue of beneficial research suggested to us, as a way of gaining insight into user experience design, was to explore the interface and interaction design of virus quarantine software from a bare-bones, wireframe point of view. We selected three antivirus products: ‘Windows Defender’, ‘Malwarebytes’ and ‘Sophos’. I then traced the wireframe structure of each interface. I believe this was a very valuable exercise for our project, because we gained an understanding of how to transport the user through a process. The quarantine software has a clear and concise way of moving the user through the steps of identifying, targeting, quarantining, removing/cleaning and completing; like a story, it has a beginning, middle and end. This is what I feel our project is lacking at this point: a level of immersion and storytelling to really capture users’ attention and make them care about the story, metaphor and argument we are telling them through our interactive data visualisation.

UPDATED WIREFRAMES & USER EXPERIENCE:

 

UPDATED INTERACTIVE WIREFRAME PROTOTYPE:

here

FURTHER USER EXPERIENCE TESTING:

REFINING WIREFRAMES & USER EXPERIENCE BASED ON TESTING FEEDBACK:

FINAL INTERACTIVE WIREFRAME PROTOTYPE:

 

 

INTERFACE GRAPHIC VISUALISATIONS:

DATA COLLECTION:

PROTOTYPING PROCESS:

Click here to view live and interactive version

CODING PROCESS:

 

DESIGNING THE VIRUSES:

Above: Inspiration Luke Jerram’s Glass Microbiology sculpture series.

FINAL WEBSITE:

Click here to experience IDQ

Overall I am very pleased with the outcome of this project. I believe we have produced and executed each integral aspect of the core functionality that we set out to build. We were ambitious with this project: there are lots of different parts all working together, and each group member felt out of their comfort zone at times and learnt a lot of new things during its creation. Some of these elements included: parsing users’ information from a form to customise their experience, specifically which Petri dishes they see, to make their interaction with our website more personalised and unique; implementing Matter.js, a JavaScript physics engine, on the Petri dishes page so that users can interact and play with the viruses in a fun and playful way inside the four Petri dishes; and adding a fully interactive JavaScript timeline with pop-ups briefly summarising any notable events that occurred in each year, for each of the three technology companies and for malware. We designed a virus for each feature of malware, or ‘symptom’ as we call them in the project, 18 in total. We then individually animated all 18 ‘symptoms’ in After Effects and exported them as GIFs to place inside the Petri dishes. I think we have successfully created a clinical, medical aesthetic and feel for the overall project through the interface and experience design. I took heavy inspiration from the NHS for the overall branding and interface design, including every detail, down to the border width, hover colour, fonts, favicon, page naming and buttons, all identical to those used on the real NHS website. I think this succeeds in immersing, slightly confusing and disorientating the user through our decision to adopt their branding guidelines. Hopefully, the user is asking: why have they done this, and what argument are they trying to make through doing this?

I think the overall litmus test for the success of our project is what I mentioned earlier in my blog: how effectively did we communicate our argument about the similarities between the three technology platforms (Facebook, Google and Microsoft) and malicious malware? We knew from our research that there were similarities, and we hoped this interactive data visualisation would be successful in portraying that data. I honestly believe the argument has been successfully made, because you experience the negative consequences of these features first hand as a user of our experience. If you click on the Cortana virus, for example, you can see and read about its symptoms, which include webcam and microphone access; this triggers a request to access both of these things through your browser. If you accept, you will see, hear and physically experience what Cortana is able to access. I believe this is quite a powerful and impactful way of experiencing these symptoms and putting across our argument. I often think the general public struggles to grasp, in real terms, the consequences of using some of the services offered by the technology companies featured in our project. Forcing users to experience it first hand when they select a virus sends an instantaneous and clear message about the amount of control, access and privileges these services have, which is important because these companies do not tend to be very open with consumers about the privileges they hold. When users visit our malware Petri dish page, they see the same features, perhaps even less intrusive and shocking ones than on some of the technology companies’ Petri dish pages, and these trigger the same pop-ups. This should hopefully drive home the argument we are presenting to the user: that there really is no difference between malware and these invasive technology platforms and the services they offer us.
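The permission-prompt mechanism can be sketched like this (an illustrative reconstruction, not our production code; the function names are placeholders, but `getUserMedia` is the standard browser API that raises the webcam/microphone dialog):

```javascript
// Build a getUserMedia constraints object from a virus's listed symptoms.
function mediaConstraints(symptoms) {
  return {
    video: symptoms.includes('webcam'),
    audio: symptoms.includes('microphone'),
  };
}

// In the browser, calling this when a virus is clicked is what makes the
// permission dialog appear: access is requested, never silently granted.
async function demonstrateAccess(symptoms) {
  const constraints = mediaConstraints(symptoms);
  if (!constraints.video && !constraints.audio) return null;
  return navigator.mediaDevices.getUserMedia(constraints);
}
```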

I think we have learnt a lot from producing this project. The first lesson concerns ambition: we tried to do lots of different things in one project, with many working elements, including frameworks we had to learn from scratch and had never used before. Our expectations were maybe over-ambitious given the timeframe, as we haven’t finished the project to quite the high level I think we all hoped for. This was mainly because the project was incredibly code-based, using frameworks like Matter.js, which no one in the group had used before; it is difficult to manage time and know how long certain aspects of a project will take when you haven’t used a framework before. Another takeaway would be prioritising tasks better: if we had prioritised the most important jobs first and not worried so much about the smaller things, this would have benefitted our time management, and you would maybe see a more complete product. For example, we didn’t have time to put the contents in the ‘Virus Key’, despite producing its slide-out window. I think this is a shame because the ‘Virus Key’ is integral to understanding the project. I personally spent a lot of time designing the individual virus icons, and Theo spent a long time individually animating them. The issue is that when they are included in the big Petri dishes to represent symptoms inside products and services, it is very difficult to see and appreciate them. The ‘Virus Key’ would have been a nice area to showcase their individual and thoughtful design, which, due to their small scale and overlapping in the rest of the project in its current state, can’t really be achieved.

Another thing we didn’t have time to implement was the full functionality of the timeline slider. It triggers pop-ups about the corresponding year; however, our original intention was that the data-driven content on the Petri dishes page, and on the individual Petri dish pages, would change depending on the year portrayed, as the data changed. We intended to use JavaScript-triggered CSS animations so that, as you click along the timeline through the years, each Petri dish and its contents would change scale to accurately represent the data. We just didn’t have time to implement this.
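Had we implemented it, the scaling might have looked something like this (a hypothetical sketch: the area-proportional mapping is my own assumption about how the data would be represented honestly, and the names are placeholders):

```javascript
// Hypothetical sketch of the unimplemented timeline feature: scale each
// Petri dish so its *area* is proportional to that year's data value.
// Area grows linearly with the data, so the radius (the CSS scale
// factor) grows with the square root of the ratio.
function dishScale(value, baselineValue) {
  return Math.sqrt(value / baselineValue);
}

// In the browser, the JavaScript-triggered CSS animation would be e.g.:
// dish.style.transform = `scale(${dishScale(yearValue, firstYearValue)})`;
```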

The final point concerns the accuracy of our data. Finding data was a tough and long process, which made it all the more annoying that we didn’t use all of it in our project. We would love to be able to stand behind the accuracy of the data used in our project 100%; however, some of the numbers are very difficult to find and require a private investigator’s level of determination, as some of it is simply not public knowledge. A lot of the data we collected, which we included as a data sheet at the bottom of our about page, is verifiable and direct from the source, although some was found in online articles and its validity is not necessarily verifiable. We all learnt a valuable lesson about the importance of, and time it takes to collect, good reliable data for a project like ours. At the end of the day, our project relies on it to function: the graphics and animations are just a vessel to portray the data, and the true meaning comes from that, so ultimately if our data is not accurate the project itself loses legitimacy.

However, despite all of this, I think that in the given timeframe we have produced an ambitious, complex and interesting project that makes an intriguing argument the user may never have contemplated before, and that really pushed all group members out of our comfort zones. If we had had more time, or had been less ambitious about what we could achieve in the timeframe, we could have presented a more finished product. However, I believe it is better to present a minimum viable product that is full of potential than a completely finished product that lacks risk and ambition and doesn’t challenge us in unexplored areas. It would also be nice to optimise the project for mobile devices.

Click here to experience IDQ

 

Finished Extension

 

[REVISED] Short Proposal: The Facebook Filter browser extension is an interactive artwork which shows you a parallel version of your Facebook feed, in an attempt to highlight Facebook’s business model rather than their mission statement to “bring the world closer together”. You see less content and more advertisements when Facebook is losing money (when their stock market value is decreasing).

[REVISED] Long Proposal: The Facebook Filter browser extension for Google Chrome is an interactive artwork which, when installed and running in a web browser, presents the user with a redacted version of their existing Facebook feed. It distorts the images on their feed through pixelation; the severity of the pixelation is determined by a live data feed of Facebook’s current percentage change in stock market value. The aim of the Facebook Filter is to accentuate and exemplify Facebook’s business interests to the user, by literally offering up a more unpleasant experience when Facebook’s shares depreciate in value. Depending on the size of the drop in price, the Facebook Filter pixelates content accordingly. If Facebook’s stock is flat or dropping slightly, there will be a slight decrease in usability. However, if there is a major drop in Facebook’s stock, like the almost 7% drop seen recently during the ‘Cambridge Analytica’ data breach scandal, the pixelation value increases dramatically, to the point where Facebook becomes unusable because all media content is completely indecipherable. When Facebook’s stock is increasing in value, the filter does not affect the content of the user’s news feed, and there is no distortion or pixelation of images as a result of Facebook’s profits.
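The mapping from stock movement to pixelation can be sketched as a single function (the thresholds and block sizes below are illustrative assumptions of my own, not the extension’s actual values):

```javascript
// Illustrative sketch: map Facebook's live percentage change in stock
// price to a pixelation block size for news-feed images. A block size
// of 1 means the image is left untouched; larger blocks mean coarser,
// less decipherable images.
function pixelationLevel(percentChange) {
  if (percentChange >= 0) return 1;   // stock flat or rising: no distortion
  const drop = Math.abs(percentChange);
  if (drop < 1) return 4;             // slight drop: mildly degraded usability
  if (drop < 5) return 16;            // noticeable drop: heavy pixelation
  return 64;                          // major drop (e.g. ~7%): feed unusable
}
```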

The Facebook Filter attempts to highlight Facebook’s business model, and to make a political and ironic comment, in a playful and mildly irritating way, about how Facebook really feels about its users when compared directly with its shareholders. According to Facebook’s founder, Mark Zuckerberg, the site thrives by giving its users the “power to share and make the world more open and connected”; indeed, its mission statement is to “bring the world closer together”. The Facebook Filter questions this mission statement, asks what the true mission statement might be, and asks who Facebook is really serving. The Facebook business model relies on the network effect: the more users Facebook has (and the more of those users’ data it has), the more likely businesses are to be attracted to advertising on the platform, and the more money Facebook can charge them to advertise. This is how Facebook is able to offer the service to users free of charge: you become the target of advertisers and advertisements as a condition of using the service. There is no opt-out option; you are also obliged to give away your data and privacy as a result of using the platform, because as a prerequisite to signing up to Facebook you agree to its privacy policy. Mark Zuckerberg is quoted as saying “Privacy is no longer a social norm”. Many users of Facebook are not aware of what they are agreeing to when they sign up, because most do not read the privacy policy agreement, nor do they understand Facebook’s business model. Hito Steyerl, in her essay ‘A Sea of Data: Apophenia and Pattern (Mis-)Recognition’, states, “Analysts are choking on intercepted communication. They need to unscramble, filter, decrypt, refine, and process ‘truckloads of data.’…Even WikiLeaks’ Julian Assange states: ‘We are drowning in material.’” I feel a similar way about Facebook: it is just a dumping ground for internet diarrhoea, and this was another motive for filtering it, to make a statement and question its importance, its purpose in society and how much we rely on it and check it on a weekly, daily, hourly basis. I like the idea of people using Facebook less as a result of the Facebook Filter, and if I could improve it, I would add a feature which showed the user how much time they have saved by using it. It would accumulate the time during which the pixelation was severely distorting the content on the news feed, to the point where it was unusable, then store that data and present it in the extension panel, to show the user how much time they have saved by not looking at or using Facebook.
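A minimal sketch of how that time-saved counter could work (the threshold value, tick mechanism and function names are all my illustrative assumptions, not part of the actual extension):

```javascript
// Accumulate "time saved" only while the pixelation scale is at or below an
// unusability threshold. `scale` is the current pixelation value and
// `elapsedMs` is the time since the last tick; both would come from the
// extension's update loop in practice.
function makeTimeSavedCounter(threshold = 0.05) {
  let savedMs = 0;
  return {
    tick(scale, elapsedMs) {
      if (scale <= threshold) savedMs += elapsedMs; // feed unusable: count it
      return savedMs;
    },
    total: () => savedMs,
  };
}
```

The running total could then be rendered in the extension panel each time it is opened.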

The initial idea of the Facebook Filter was to pixelate, distort and redact all the user-based content (photos and videos of friends and family, birthdays, status updates and relationship announcements) but leave all the paid sponsored posts and advertisements untouched and unaffected by the filter. This would have isolated the user and made them focus on ‘the business’ side of Facebook, illustrating the business model to the average Facebook user by directly showing them what Facebook wants them to see, when it wants them to see it. However, there was no way of clearly distinguishing this content, because it is so integrated with all the personal content on the news feed, and it is not labelled or tagged as an advertisement in the code. It would therefore be almost impossible to pixelate personal content at scale while leaving the advertisements intact.

TESTING PROTOTYPES: Due to the complex nature of this project, we needed to produce and test several prototypes for each element of the extension’s functionality. This initially involved familiarising ourselves with producing simple Chrome extensions that carried out a single function, like changing the background colour of a webpage. We then produced a Chrome extension that injected a fixed div over the top of a webpage, which would become the Facebook Filter panel. I then integrated the live stock market API feed into the extension through the manifest.json file and displayed it inside the div panel. Finally, we produced an extension that replaced all the images on a webpage with pictures of kittens, because we needed to target all of the img tags on the Facebook news feed so that we could overlay an HTML5 canvas onto each image tag and pixelate the image accordingly.
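As a rough sketch of that image-targeting step (the function and its arguments are my own, not code from the extension), a content script might gather the feed’s img tags like this:

```javascript
// Collect every <img> in a document and hand each one to a processing
// routine. In the kitten prototype that routine swapped the src; in the
// final extension it would be the canvas pixelation step.
function filterImages(doc, processImage) {
  const images = Array.from(doc.querySelectorAll('img'));
  images.forEach(img => processImage(img));
  return images.length; // number of images handled
}
```

Running this from a content script injected into facebook.com would give access to the news-feed images; a real version would also need to re-run as the feed lazily loads more posts.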

CHROME EXTENSION TESTS:

 

HTML 5 CANVAS TESTS:

Above: My workings attempting to determine how the percentage change in Facebook’s stock would translate into a pixelation value for the HTML5 canvas. I needed to produce a maths formula into which the percentage change in FB stock (x) would be fed, and which would spit out the corresponding level of pixelation. To do this, I experimented with the HTML5 canvas pixelation test (see above) and changed the pixelation value. I then wrote a list from -1% to -10%; I decided that -10% is the very worst Facebook’s stock could fall, as during the ‘Cambridge Analytica’ scandal it fell by 7%, and that was possibly the worst PR Facebook has ever received. I then paired each percentage with a pixelation value and asked for help from a friend who studies mechanical engineering; he took the corresponding values I showed him and produced the formula:

0.11+(x/100)

which I then used in an if statement in the code for my HTML5 canvas.
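As a sketch of how that formula might sit inside the if statement (the function name and the clamping at -10% are my illustrative choices, and I am assuming the value is used as a canvas downscale factor, so smaller means coarser pixelation):

```javascript
// Map Facebook's percentage stock change (x) to a pixelation scale.
// A return value of 1 means no distortion; 0.11 + (x / 100) gives
// 0.10 at -1% down to 0.01 at the assumed worst case of -10%.
function pixelScale(percentChange) {
  if (percentChange >= 0) return 1;        // stock flat or rising: feed untouched
  const x = Math.max(percentChange, -10);  // cap at the assumed worst case
  return 0.11 + x / 100;
}
```

The resulting scale would be applied by drawing each image onto the canvas at that fraction of its size and then enlarging it with image smoothing disabled, which produces the blocky pixelation effect.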

 

WEBSITE: Below you can see screenshots of the website I built to accompany the chrome extension, using Hype and a Bootstrap framework. I wanted to link to the website from the Facebook Filter extension panel. Check out the live version of the website here.

FINAL FINISHED EXTENSION:

Click here to download the ‘Facebook Filter’ Chrome Extension now!

Below: Shows an animated GIF of the ‘Facebook Filter’ being installed, turned on and working, by filtering the image content on the Facebook feed.

Below: A visualisation I made of how I envisage a more polished, final version of the extension panel looking, if I had more time to spend on the project. You can see from the visualisation that it is pixelating the images on the news feed in relation to the stock prices.

Click here to download the ‘Facebook Filter’ Chrome Extension now!

Project progress…

Image by felipestoker on Reddit.

FURTHER RESEARCH: Since publishing my previous blog post for this project, Facebook has been in the news a lot, surrounding the ‘Cambridge Analytica’ data breach scandal. This prompted a #deletefacebook movement and led to Mark Zuckerberg testifying before the US Congress, where he was grilled by senators about the privacy of users’ data and the transparency of Facebook’s business model, as well as the ease with which third-party apps like the one used by Cambridge Analytica could gain wholesale access to the data of Facebook users and their friends, in order to publish targeted adverts to influence elections and political decisions. When the news about the scandal broke, Facebook’s stock dropped drastically almost overnight: on March 19th it fell by 6.92%, the biggest single-day slide in the company’s stock market value since it went public in 2012.

It is therefore very fitting that we are producing a project that investigates and explores the relationship between Facebook’s business interests and its users’ experience. The business model fundamentally relies on users using the platform, but it also relies on harvesting the information we upload to Facebook and enabling businesses to access it to create targeted advertising. There is a paradox around who Facebook is really serving, and this is what the extension aims to exemplify: if the business model is working and Facebook is making money, the experience is as normal; if Facebook is losing money, however, you suffer as a result and are presented with a more distorted experience, dependent on the level of stock market decline. With the Cambridge Analytica scandal and Mark Zuckerberg in the press a lot recently, there have been many artists, journalists and creatives producing content in response to the situation. I feel it is worth showing these below, even at this late stage in the project, as they still serve as sources of inspiration and are relevant to our project. Will and I have curated some of our favourite pieces we found, both artistic and academic references, which we will draw upon to further inform the work we produce for the project.

Andrei Lacatusu – Social Decay: ‘Social Decay’ is a series of 3D renders by Romanian artist Andrei Lacatusu, showcasing a dystopian post-social media world. The images highlight American roadside style signs for social media giants; Facebook, Instagram, Twitter, and Tinder, each positioned in a setting that points to the demise of social media, or a society that has left the platforms behind. Check out the project here.
Joachim Bosse – Mark Zuckerberg, give me 1 million USD for this piece – or all my data back:
Find out more about the piece here.

 

Beeple – Facebook Registration 2063: Below is a static graphical render depicting a dystopian future set in 2063 by graphic artist Mike Winkelmann, otherwise known as Beeple, who imagines Facebook registration in the future as a physical process. See it here.

Beeple – Facebook default privacy settings 2038:

 

THE WEEK Cover – Illustration by Howard McWilliam (2018): 

 

Google Trends Graph – Internet Communities Popularity: Below is a graph plotting the popularity of all the major social media websites from 2004 to 2018 based on Google Trends. It clearly shows a rapid decline in the popularity of Facebook from 2012 to 2018, compared to Instagram, which has seen a steady, consistent rise in popularity since it was established in 2010.

 

Banksy – Coney Island Avenue: This Banksy piece appeared in an abandoned area on Coney Island Avenue. The piece is reminiscent of the work of artist Kara Walker and features a contractor or businessman, holding a stock market arrow, lashing out at a small group of kids, women, an elderly man and a dog. The piece seems to me to be a metaphor for the negative repercussions of capitalism and big business developments, and the damaging effect redevelopments can have in driving out established communities. It seems fitting that the piece of street art was painted on a wall in an abandoned lot a short distance from Manhattan, the home of international finance trading.

 

News Feed Eradicator – Chrome Extension: The News Feed Eradicator Chrome extension helps you cut the crap out of your Facebook feed and get back to communicating with friends over the internet. It deletes all the content from your news feed and replaces it with a randomly selected inspirational quote; you can also add your own. Check it out and find out more here.

 

VOX – Why you keep using Facebook, even if you hate it: An interesting video that explores the idea that, the network effect is Facebook’s biggest selling point, and the root of many of its problems, because “Facebook has created a network effect on steroids”.


TESTING PROTOTYPES:

Check out the live prototype here.

Check out the live prototype here.

Kitten Chrome Extension Example:

Check out the live prototype here.

Transgressing Boundaries Process…

BRIEF: (Re)view Boundaries. “Life will not be contained within a boundary, but rather threads its way through the world along the myriad lines of its relations” (Tim Ingold). Create a mobile experience that explores the political, personal, physical or psychic conditions of the site of a boundary in Bristol. Who defines the edges? What propels us to move across borders? How do people transgress the limits set out?


RESEARCH | Sources of Inspiration:
Duncan Speakman – A Folded Path: 
A big source of inspiration for this project is Duncan Speakman and Circumstance, specifically the projects ‘A Folded Path’ and ‘Of Sleeping Birds’, which are pedestrian speaker symphonies: soundtracks for cities, carried through the streets by a participating audience and experienced by everyone they pass. Each comprises 30 custom-built, location-sensitive portable speakers, each playing a different element of the music. The audience, divided into groups, takes different routes through the city. The speakers are highly directional, so the movement of the people within each group changes the acoustic relationship between them; the audience becomes the orchestra. Learn more about the project and Circumstance here.

Janet Cardiff & George Bures Miller – The City of Forking Paths: Another source of inspiration is Janet Cardiff & George Bures Miller’s audio-visual walks, specifically their 2014 piece ‘The City of Forking Paths’, which used an iPod Touch to navigate participants visually and aurally along a route in Sydney, Australia. As you walk, you follow the audio and video on the screen, which was previously recorded at the same locations. The voice of Cardiff leads you, and staged scenarios appear in the video, such as incidents, performances and musical experiences, for you to discover along the way as you reflect upon the history worn into the streets. Also see the video of a similar piece, ‘Alter Bahnhof Video Walk’, below. Check out their other work here.

Rik Lander – Haply Headphone Experiences: Haply headphone experiences are app-based audio dramas produced by Rik Lander. The soundtrack guides participants through a stimulating mini-drama. What everyone hears is synchronised, but may be slightly different, which results in funny, thought-provoking moments for entertainment or team-building. They are aimed at conferences, to help break the ice and help delegates become more empathetic and open to new ideas. Check out their website here.

Iain Borden – Skateboarding, Space and The City:

IDEA PROPOSAL:

Below: A series of images showing us interviewing both skateboarders and members of the general public at the Bearpit in Bristol city centre, a popular space for both groups, as there are a number of obstacles to skate and it is an underpass for pedestrians crossing under the busy roundabout to quickly get across town. We interviewed approximately five skateboarders and five members of the general public and asked them a series of questions which we decided on beforehand (see above). We wanted to gauge the kind of responses we might get when it comes to recording the final documentary, whilst also giving you an idea of what the audio documentary will comprise and what we are trying to make. To do this we have cut together a short pilot with some of the best responses we got from both the skateboarders and members of the public at three of the main skate spots in Bristol: The Bearpit, The Memorial and Lloyds. You can listen to this short pilot below.

PRESENTATION FEEDBACK:
WIREFRAMES: Below is a series of wireframes we produced to give an idea of the user journey through the app, from the start of the locative audio documentary experience to the finish. More importantly, we thought a lot about how we were going to direct the user to the specific area of Bristol they would need to be in for our documentary to be as immersive as possible. There were three skate spots in the centre of Bristol which we wanted to target: The Bearpit, The Memorial and Lloyds, and coincidentally, and rather fortunately for us, these three spots form one linear walk from one side of Bristol to the other. We initially had three ideas about how the user would experience the app and how the interface would accommodate this. Firstly, we considered having a map page with all three spots for the user to select; they would visit a spot with the aid of Google Maps directions, then play the chapter of the audio documentary exploring that skate spot, listening to the views and opinions we will collect from the skateboarders and members of the public who interact with that space. The problem with this idea is that we wanted the app, despite being broken into three sections (one for each skate spot), to have a clear structural narrative: a beginning, middle and end. This would be difficult to achieve if the user was able to select any spot at random; we could have made a narrative arc for each skate spot, but I don’t believe it would have felt as complete without referring to the other spots. Our second idea was to make one long audio piece with a clear narrative structure and conclusion, instruct the user to start at the Bearpit, and use the map to navigate to the next spot when instructed to do so, but this seemed like a logistical nightmare because of getting the timing right with everyone taking part walking at different paces.
Finally, we settled on an idea based on our initial thought: we will show the user a map with only the first spot we want them to go to; they are encouraged to tap that skate spot and play that chapter of the audio documentary, then return to the map page, where the second spot appears to select, and once they have listened to that, the final spot appears on the map, concluding the documentary. This way, they can only go to the spots in the linear order we intended, they can play the audio once they have arrived and are ready, and they are not rushed by our sense of timing as they would be in our second idea.

By revealing each skate spot, and consequently each chapter of the audio documentary, systematically in this way, we could also produce a traditional documentary narrative structure: setting out our intentions in the first chapter, showing both sides of the argument across the first, second and third chapters, and concluding in the final chapter. Hiding and revealing the spots guides the user around the city in the most efficient and linear way, whilst simultaneously guiding them through the audio narrative of the documentary.
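The progressive-reveal logic we settled on can be sketched as a tiny bit of state (the function names here are illustrative, not from the actual app):

```javascript
// Spots unlock in a fixed order; the next one only appears on the map
// once the current chapter has been listened to.
function makeRoute(spots = ['The Bearpit', 'The Memorial', 'Lloyds']) {
  let unlocked = 1; // only the first spot is visible to begin with
  return {
    visibleSpots: () => spots.slice(0, unlocked),
    completeChapter: () => {
      if (unlocked < spots.length) unlocked += 1;
      return spots.slice(0, unlocked);
    },
  };
}
```

The map page would render `visibleSpots()`, and the audio player would call `completeChapter()` when a chapter finishes, which is exactly the hide-and-reveal behaviour described above.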

LOGO:

Analysis of results…

We got some useful responses and feedback from our online questionnaire, which we built using surveymonkey.com, and in which we asked members of our target audience questions about certain aspects of the concept and potential functionality of our app. One of our main concerns with the app was addressed in the first question: ‘Would you be willing to allow our app to use your personal photos, text messages, contacts and location as part of an interactive drama?’ Our main worry was whether users would feel comfortable giving away their personal data in order to be a part of an interactive app-based drama. The results were rather concerning, as 57% of people said no. However, I personally believe the question could be re-phrased in a way that might make users feel more comfortable about giving their data away. Firstly, it is not clear in the question that they are going to get something back, in the form of an amusing and entertaining video which incorporates their photos, text messages, contacts and location into the narrative of an interactive story. Secondly, we would not ask for all of the things in the question at once, to avoid overwhelming and bombarding the user with pop-up permission requests. Finally, if a user downloads the app they should be aware of the concept and have a general understanding of how the app works, and should therefore be aware that they will be obliged to give away a small portion of their data in order to interact with, enjoy and be entertained by the app. For example, many people don’t think twice about accepting pop-up permission requests for their microphone and photos in exchange for a service, in apps like Uber or Snapchat.
I therefore believe that in reality people would be willing to give away their data once they have downloaded the app, for the simple reason that they would not download the app without being made aware that giving away some of their data is an integral element of the app’s functionality; if they don’t feel comfortable doing this, they won’t download the app in the first place.

ADDITIONS & CHANGES TO PITCH DOCUMENT:
The main change we made to our strategy in response to the testing results was designed to combat people’s concerns about giving away their data in exchange for an interactive app-based drama. We decided to be completely transparent and play on the idea of how ‘The Data Intrusion’ will use your data, so we turned the app requesting permission to use your photos into an animation and put it at the start of a campaign trailer for the app. We also produced minimal static advertising campaigns, simply featuring an iOS-style permission box, playing on the idea: do you dare to give your data away to The Data Intrusion? I believe this level of transparency and openness combats users’ reservations about giving their data away for an interactive dramatic experience.

The other way testing informed our pitch document was on the question ‘How many times would you be willing to interact with the drama, if there were alternate storylines?’ The outcome was roughly a 50/50 split between ‘3 times’ and ‘5 or more times’. We chose to go with the lower number of 3, and consequently produced a flow chart with three disparate storyline narratives interwoven with one another, and from there produced a series of wireframes to visualise the flowchart story in the form of the interactive interface of an app.

Another change we made to the pitch document was switching our client from Arts Council England to the app development company Six to Start, because we felt there was more of a connection between our app and this company. We decided to target the pitch document at Six to Start mainly because the company could be the potential producer of our app; they have a lot of experience in the field and are well renowned for producing high-quality apps and experiences. I believe they are a much more fitting client than Arts Council England funding.

Individual Proposal and Project Development…

Above/Below: Some slides from my presentation, including logos and branding I produced for the project. I took the design aesthetic and branding from Facebook itself, so that the extension would look like it could have been made by Facebook: adding another f to the Facebook logo and pixelating it, then adding the word ‘filter’ after ‘facebook’ in Facebook’s distinctive font but obscuring it with pixelation, just as the Facebook Filter will with the photos on your Facebook feed. The presentation also includes visualisations of how the extension will look in the browser, with a live stock chart showing Facebook’s current, up-to-date stock market value, and how their profit or loss translates into a pixelation value for the images on your feed. There is also an on/off button to turn the Facebook Filter off when the user wants to return to using Facebook normally, without having to disable the extension manually in the extensions manager. Another visualisation shows how the Facebook Filter will affect your Facebook feed, pixelating all the images except those used as Facebook advertising, as a way of shamelessly satirising how Facebook makes its money, and to show that it’s not about the sharing of photos and connections with friends and family: Facebook is actually a tool for collecting data to sell to advertisers to increase its bottom line. I include references to artists such as Ellie Harrison, who reprogrammed a vending machine to release snacks only when news relating to the recession made the headlines on BBC News, and James Bridle’s browser extension Citizen Ex, which was a big inspiration for this project; Citizen Ex tracks your browsing history and, based on the IP addresses of the websites you visit, gives you an ‘algorithmic citizenship’ based purely on where you go online.
Finally, Edmund Clark, a British photographer whose series ‘Negative Publicity’ consists of photographs documenting top-secret locations where people were held by the state, some of which, like the one below, had to be pixelated and redacted for obvious security reasons. Below you can also see that I sourced a live API feed, in the form of a JSON file, of Facebook’s current stock market financial information. I also found a JavaScript slider that allows me to increase the pixelation value of an image, and I have experimented with building and developing my own Chrome extensions using the extension developer mode in Chrome; for example, building an extension that will change the background colour of any webpage with a simple click, which is surprisingly simple, involving only a few lines of code, and works on most web pages.
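For illustration, the background-colour change really can be done in a couple of lines inside a content script (the function wrapper and the colour are my own hypothetical choices, not the code from my test extension):

```javascript
// Set the background colour of a page's <body>; run as a content script,
// `doc` would simply be the page's `document`.
function setPageBackground(doc, colour) {
  doc.body.style.backgroundColor = colour;
  return doc.body.style.backgroundColor;
}
```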

RESEARCH | Sources of Inspiration:
Ellie Harrison – Vending Machine:


James Bridle – Citizen Ex:

Edmund Clark – Negative Publicity:

Mishka Henner – Dutch Landscapes:

IDEA PROPOSAL:



WIREFRAME:

GRAPHIC VISUALISATIONS:

LOGO:

UX Testing – ‘The Data Intrusion’ Questionnaire…

For our first set of user testing, we wanted to gauge an understanding of the reception our app would receive and the potential of the app idea amongst our target market. We produced a survey using surveymonkey.com, you can see the results below.

RESULTS: 21 participants took part in the survey

Q1: Would you be willing to allow our app to use your personal photos, text messages, contacts and location as part of an interactive drama? [YES/NO]
9 YES and 12 NO
This is a concern for our project; however, I believe the result would improve if the question had been clearer in accentuating the fact that the app would only access this data once, for use in the drama, and perhaps only asked for users’ photos rather than text messages, contacts and location as well.

Q2: Would you watch a drama on your phone/tablet? [YES/NO]
16 YES and 5 NO
This is a positive result for our project.

Q3: What genre of film do you find most engaging? [Thriller/Action/Horror/Comedy/Documentary/Crime/Mystery/Drama]

Q4: How many times would you be willing to interact with the drama; if there were alternate storylines? [Only Once/2 Times/3 Times/4 Times/5 or more Times]

When asked how many times they would interact with the drama if it had alternate storylines, the majority replied 3 times, with the second most common answer being 5 times or more.

Q5: Would you be willing to pay money for this unique experience? [YES/NO]
10 YES and 11 NO

When asked if they would pay for this experience, the answers were almost evenly divided: 10 yes and 11 no.

Q6: Based on all of the above, would you be interested in downloading this app, and would you suggest it to a friend? [No (I would not download it)/Yes and I would suggest it to a friend/No and I would not suggest it to a friend/Yes and I would not suggest it to a friend/Yes (I would download it)/No and I would suggest it to a friend]
15 Yes and I would suggest it to a friend

When asked whether they would download the app, and whether they would recommend it to their friends, the answers were overwhelmingly positive, with 15 choosing ‘Yes and I would suggest it to a friend’ and the remaining 6 split between the other answers.

See in-depth results analysis here.

Conclusion:
To conclude

Postcard Texts

BRIEF: Each week you will be asked to make a postcard that is relevant to a piece of literature: a postcard-sized image that represents an element of the text, with a relevant quote on the back. You must submit a blog post that documents the postcards, explains why you chose them, and shows how the texts you chose connect to your practical work.

The postcard part of your portfolio will be assessed on the effectiveness of your attempts to do the following:
Research: Engagement with relevant theoretical and design resources, engagement with debates around coded spaces and objects, initiative in finding appropriate resources?

JASON FARMAN POSTCARD:

The first postcard I produced for the series was in response to a quote from the first chapter of Jason Farman’s book ‘Mobile Interface Theory: Embodied Space and Locative Media’. The quote, which can be seen above, is: “The mobile phone is now deeply woven into my everyday life, and I’ve become so comfortable with the ways I use it that I have gotten to a point where I don’t think of my mobile media practices as noteworthy.” I like this quote in particular because it personally resonated with me: I’ve grown up with the internet and smartphones, and it doesn’t feel as though I have adopted the technology; it’s just always been there. It’s easy to look around and see the permanent attachment we all have to these small computers we carry around in our pockets; they are so ingrained within our everyday lives that they almost feel like an appendage, like they are part of us. We rely heavily on phones in modern-day society, probably more so today than when Farman was writing back in 2012, and I think Farman sums this up in a really powerful and eloquent way in this short quote I selected for my postcard.

For the image on the postcard, I wanted a powerful image to accompany the quote on the back, so I drew a loose sketch of a smartphone in someone’s hand, but with the phone handcuffed to their wrist, to accentuate the idea of our reliance on and attachment to these devices that are so prevalent in our lives. The handcuff has connotations of law and order and being arrested, and I feel an illustration depicting someone handcuffed to their phone gives the viewer the impression that the person in the image has been attached to their phone against their will. It feels like we have to have a smartphone to function in modern-day society, or we are left out of many perks, such as fast communication, transport services, discounts and digital media consumption. Then, once we have a smartphone, we are trapped in the ecosystem and can’t get away from it, because we have become too reliant on phones in living our day-to-day lives.

The caption on the phone screen reads “DO NOT QUESTION AUTHORITY”. This is a direct reference to the 1988 film ‘They Live’, in which the main character, Nada, discovers a pair of sunglasses capable of showing the world the way it truly is; when he looks at a magazine through the sunglasses, it reads “DO NOT QUESTION AUTHORITY”. I thought this was a powerful message and relevant to the mobile phone, because it feels as though smartphones are the arms through which governments and corporations reach out to us, feed us information and monitor us. This seems to be the status quo, just the way things are, and we are not supposed to question it. I believe that if smartphones had featured in ‘They Live’, “DO NOT QUESTION AUTHORITY” is the message Nada would have seen through his sunglasses, hidden behind the shiny touchscreen exterior.

ROLAND BARTHES POSTCARD:

The second postcard I produced for the series was in response to a quote from Roland Barthes’ essay ‘The Death of the Author’. The quote, which can be seen above, is: “the reader is a man without history, without biography, without psychology; he is only that someone who holds gathered into a single field all the paths of which the text is constituted.” I chose this quote because I thought Barthes makes an interesting point, and I find it intriguing that although ‘The Death of the Author’ was written before the proliferation of smartphones and portable mobile media, the statement holds true for users of smartphones today. To update Barthes’ quote for today’s society: I believe a user is someone who holds gathered into a single field all the paths of which the phone is constituted. The user of a smartphone is the epitome of a constitution of data, or, as Gilles Deleuze describes in his journal article ‘Postscript on the Societies of Control’, a ‘dividual’.

For the image on the front of the postcard, I wanted to depict the reader that Barthes describes: a man who is nothing but a constitution of the whole text. To do this I sketched the outline of a man sitting cross-legged reading; he has no clear features, and the only thing that is defined is the suit he is wearing. I then filled his whole body in with words, specifically the actual text of ‘The Death of the Author’, so my illustration depicts the metaphor of ‘the reader’ that Barthes describes in his essay.

QUENTIN STEVENS POSTCARD:

For the third postcard in the series I produced an image in response to a quote from the second chapter of Quentin Stevens’ book ‘The Ludic City: Exploring the Potential of Public Spaces’. The quote I selected, which can be seen above, is: “Play often runs against orthodoxy, ignoring the systematic organization of human activity, and transgressing the boundaries of seriousness, including taboos.” I chose this quote because I feel it epitomises the power of a game: I believe playing a game can break down barriers and relieve tension between people. I think this is why we tend to play games at the start of conferences; getting people to break down their inhibitions by playing a silly game means they are more likely to open up, having already made themselves look silly. This is especially true when we play games in large groups, because we conform to others’ behaviour and are more likely to get involved if everyone else is taking part and enjoying the game.

For the image on the postcard, I illustrated a professional-looking woman who could be on her way to work, carrying a briefcase, with the shadow of a child playing on the ground behind her. My thinking was that the shadow is a younger version of herself playing as a child, but also her inner self, which has the desire to play but is repressed as she goes about her professional daily life. The image is a metaphor to embody the quote and to articulate the idea that no matter how serious we are, play is capable of taking us out of our orthodox and sensible day-to-day lives and returning us to a childhood state of play, free of inhibitions. Using the shadow as the metaphor was intentional: we all have a shadow, and therefore we are all capable of ignoring mundane life, transgressing boundaries and playing, no matter what our age, race or gender may be.