For this last post I plan to evaluate and reflect upon my honours project and the work I have carried out over the past year. I will discuss the positives and negatives of the past year, as well as commenting on the skills and knowledge I’ve gained in the process. I will also detail some of the bigger issues that arose during the project and how they were worked around. While I may be repeating some of the points made previously, I feel it is important to restate them in order to provide a more complete reflection.
As the honours project is essentially an open brief, I decided that I would like to look at the technical use of sound in games to some degree. I chose this because I felt that research into emotional feedback and experience relating to sound had already been covered in great detail, while the technical application of game audio was still an under-researched field. Originally I had wanted to look at how sound physically interacts with an environment and whether it could be used as either a navigation aid or as a game mechanic, something akin to the game “Blind Man’s Bluff”. I also wanted to combine this with a visualisation of the sound waves interacting with the virtual environment, but quickly realised that this idea was far too big to attempt. Instead I had hoped to work my honours project into another brief I was working on for the Make Something Unreal competition. This would have meant that I could apply my theories directly in a game without having to develop the entire game artefact myself, and increase my knowledge of Unreal Development Kit (UDK) in the process. As the team had decided on having as minimal a visual HUD as possible, I discussed the possibility of using audio in its place, which eventually led to my main aim of:
“Determining if audio can replace traditional feedback methods such as a visual Heads Up Display (HUD), compass, minimap, objective markers, etc. in a 3D game through the use of environmental and music cues and contextualised beacon sounds, and allow a player to navigate through a 3D world unaided by additional visual information.”
Unfortunately the team didn’t manage to make it to the final of the competition, and for the most part went their separate ways. Two of the coders agreed to help me develop something for my honours project, which took off some of the burden of actually producing an artefact, although only one was able to produce anything as the other’s workload became too great. We decided to use Unity3D as the game engine, as all of us had experience with it and it allows for fairly rapid development.
This was probably the best choice for the project, as the coder and I found UDK fairly cumbersome to use. In particular, I found its audio implementation capabilities hampered by a poor interface, which made adding and testing sound sources tedious. While Unity3D’s audio engine is slightly weaker by comparison, my familiarity with it and the ability to test and tweak audio parameters quickly made it the better tool to use. This choice did limit development to a single room of the university that had the Pro version of Unity3D, which provides access to the majority of the audio tools that would be needed, such as filters and custom reverb zones. We also tried to integrate the audio middleware package FMOD into Unity3D through the Squaretangle plugin, as it is a far more powerful and flexible package. It would also have freed up the development process, as we could have transferred to the standard version of Unity3D that we both had free access to. Sadly, the lack of documentation slowed this process down, and despite some initial success we failed to get FMOD working properly in Unity3D. The attempt was still worthwhile, as it let me delve further into FMOD and gain a better understanding of how it is coded into a project. Similarly, I developed my knowledge of Unity3D beyond its audio engine, learning how to implement various basic elements and prefabs.
The game looked primarily at auditory navigation through a 3D space using a call and response mechanic. This also allowed for some investigation into the semiotics of sound design: how players perceive the meaning of game world objects through audio cues. I also produced an additional experiment to gauge whether Head Related Transfer Functions (HRTFs) could provide an audible difference in sound perception.
This was probably the biggest challenge for me, as I had never looked at HRTFs, auditory navigation, semiotics or game design before. Understanding the basic concepts and functionality of HRTFs and semiotics especially took some time, as the latter has centuries’ worth of research and theory behind it. I feel that trying to integrate all of these points into the project was somewhat overambitious, even if they were necessary for producing meaningful results.
I was fortunate to find the Slab3D suite, which allowed me to test HRTF functionality without having to create my own, and to experiment with them generally so that I could tie theory and practice together. This was also a bit of a turning point for the project: until then I had simply assumed that people could perceive 3D sounds properly, without taking their listening habits into consideration. I decided this was worth investigating, and while the number of participants could have been higher, the results hinted that HRTFs improve sound localisation to some degree.
The game aspect of the project was a mixed bag. It was massively overambitious, and I feel I could have improved and refined the designs further had my time management not been so poor, as they made sense on paper but not in game. The cave section was a particular bugbear during development and the cause of the biggest pipeline issue. Unity3D treated the environment mesh of the cave as one collidable object, as opposed to separate walls and floor. Trying to implement different collision sounds for bumping off walls and for footsteps was causing the coder some major issues, until we decided that the quickest, albeit somewhat lazy and broken, way to solve it was to lay an invisible floor on top of the original to serve as a separate collider for the footsteps. While it does work, it causes some clipping issues when the player jumps up onto platforms. The pathfinding idea in the caves level didn’t work as planned either, and ended up confusing and frustrating participants more than anything else. Rather than writing it off as a complete negative, I decided it was worth investigating why this was the case, and used it as part of an analysis of semiotics in sound design. The desert and forest levels served their purpose, although the former may have been too large an area for some.
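The invisible-floor trick boils down to routing collision events by which collider was hit, rather than by surface type, since the single cave mesh can only report one identity. A minimal, language-agnostic sketch of that dispatch logic (the tag names and sound files here are hypothetical illustrations, not taken from our actual scripts) might look like:

```python
# Sketch of the collision-sound routing the invisible floor made possible.
# Tag names and sound filenames below are hypothetical illustrations.

COLLISION_SOUNDS = {
    "CaveWall": "wall_bump.wav",      # the original single-mesh cave collider
    "FootstepFloor": "footstep.wav",  # the invisible floor laid on top of it
}

def sound_for_collision(collider_tag):
    """Return the sound cue for a collider tag, or None for silent objects."""
    return COLLISION_SOUNDS.get(collider_tag)

# Because the whole cave mesh is one collider, every contact with it plays
# the wall sound; the invisible floor gives footsteps their own collider.
print(sound_for_collision("CaveWall"))       # wall_bump.wav
print(sound_for_collision("FootstepFloor"))  # footstep.wav
```

The same lookup-by-collider idea is why the workaround was quick to implement: no per-surface material data was needed, only a second collider with its own tag.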
Reflecting on the project as a whole, I would have liked to spend more time on actual game design, so that the game artefact was a slightly more enjoyable experience during testing and I could better see which gameplay elements would benefit from audio feedback. I still believe that my ideas are worth investigating further, although perhaps through better or more refined methods. Similarly, if I were to carry out the project again I’d ensure that a suitable timeframe was established and that there were enough team members to carry it out.