I have now finished my Processing project and tested it to make sure it performs as intended. As you can see from the video posted earlier, when the piece is in the space it listens to the sound in the environment and reflects what it hears onto the display on the screen. On the whole, the people who walked past didn't interact with it at all; a few noticed it but didn't stay long enough to understand what it was trying to do. I think this was because of the time of day I chose to test. I tried to coincide my testing with the time people would be finishing lectures and leaving the building through the space, and therefore facing my screen. What actually happened was that the majority of people traversing the space came from the opposite direction and didn't see the screen at all, and those who did were on their way to lectures, so they didn't have time to stop and interact with it.
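For anyone curious how the listening works, a minimal sketch of the kind of audio-reactive loop at the heart of the piece might look like the following. It uses Processing's processing.sound library; the circle is just an illustrative stand-in for the actual visuals:

    import processing.sound.*;

    AudioIn in;
    Amplitude amp;

    void setup() {
      size(640, 360);
      // Open the default microphone input and start monitoring it
      in = new AudioIn(this, 0);
      in.start();
      // Track the running amplitude of the incoming signal
      amp = new Amplitude(this);
      amp.input(in);
    }

    void draw() {
      background(0);
      // Map the current sound level onto the size of a shape, so the
      // display visibly responds to noise in the space around it
      float level = amp.analyze();
      float d = map(level, 0, 1, 10, height);
      ellipse(width / 2, height / 2, d, d);
    }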
Some, however, did interact with the piece to some degree, although I still don't think they understood what it was doing, and I suspect they were a little unnerved by me pointing a camera at them. This may have skewed how they would normally have interacted with the display. If I hadn't been there, or hadn't been so obviously filming, I might have got different reactions.
We weren’t very subtle with our positioning of the camera.
Below I have included a link to the GitHub repository for my Processing work. I have omitted the Twitter keys required for it to run, as they are personal to me and could be misused by people on the internet, leaving me liable.
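If you clone the repository and want to run the sketch yourself, one common way to supply credentials without committing them is to load them from a local file listed in .gitignore; the file and field names below are illustrative, not the exact ones the sketch expects:

    // keys.json sits in the sketch's data folder and is excluded from
    // version control, so the credentials never reach the public repo
    JSONObject keys = loadJSONObject("keys.json");
    String consumerKey    = keys.getString("consumerKey");
    String consumerSecret = keys.getString("consumerSecret");
    String accessToken    = keys.getString("accessToken");
    String accessSecret   = keys.getString("accessSecret");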