Given our experience with Such Tweet Sorrow several years ago, I was both interested in the project and keen not to be too critical of it. I remember how sensitive many of us were to what looked like stupid criticism from people who were really building their own bandwagons and never gave themselves a chance to see all the ambitious things we were doing with the project: its live-ness; its “acted-ness”; the way it wove itself into just about every emerging or emerged social platform we (mainly the wonderful Tim Wright) could think of. Not to mention all that interactivity.
We had to listen to stuff about betraying Shakespeare’s poetic language, as if Shakespeare was both utterly sacrosanct and terribly vulnerable.
So I wanted to give what sounded like another bold literary experiment with social media (and Lord knows that there aren’t many of them) a chance. But there were things in the Mitchell interview that just felt wrong. He was so guarded – so priggishly above the medium he’s chosen. He doesn’t tweet. “I’m not really a social media animal… I don’t want to add to this ocean of trivia…” If it was just a marketing ploy he wouldn’t do it… “I don’t want to feel like a gimmick chaser”. Already, this doesn’t sound bold – just cautious.
Then the thing itself. Again, a minority of critics fell on the first day or two of Such Tweet Sorrow without giving it a chance to get into its stride for the best part of another 5 weeks, so I am reluctant to cast aspersions, but…really? Really?
He has simply crafted a narrative in a series of tweet-sized passages. And played them out over Twitter.
There’s a ten-hour gap between two tweets that clearly describe the same events. And then more this morning – still in the time frame of the story, but not of the medium.
Mitchell so far fails to realise that the quality of Time is crucial to Twitter – its currency, its spontaneity, its asynchronicity, its ability to be both live and of record. If you don’t play with all of this you might as well give up.
The Right Sort is not “of Twitter” in any interesting way at all. The only people the account is following are a handful of publishing (and Twitter) marketing types. It’s not that cynical to assume one of them is pushing out the content for Mitchell, out of the sausage machine.
Not that this delivery method matters – there is no interactivity to mention. No interactivity at all. In fact there is no argument for it taking this form other than that the hero experiences much of the story in bite-sized, trippy “pulses” and that the author found it very demanding.
His claim that he’s been through “escapological challenges posed by diabolical treble-strapped textual straitjackets” may be true for the experience of composition, but none of that jeopardy appears in the tweeted result.
Maybe he found it so difficult because he feels so lofty about the medium through which he’s chosen to push his message – and he hasn’t bothered to understand it.
From Mr Cloud Atlas, this is just disappointing. It’s not even a gimmick, and hardly a stunt. To me it reads terribly safe, maybe the most un-experimental thing of Mitchell’s I’ve come across.
We are just approaching the final phase of research into a game called Cold Sun. It is still very much at the prototype stage; however, the initial results, both in terms of the graphics and the gameplay, are very promising.
The initial idea emerged from a drunken conversation with Canadian developer Jesse Blum and Mudlark founding member Rachel Jacobs while working in Brazil in 2012. We were doing some experiments in a farmhouse in the rainforest outside Rio de Janeiro, and the isolation was quite profound: we were a three-hour walk from the nearest road.
We started riffing on the idea: what if you woke up alone in a place like this with the rain coming down hard, as it does a lot in this part of Brazil? A lone survivalist, having to piece together what had happened to the world while they were asleep. It’s a classic trope, beloved of games and movies – the loner who wakes to find themselves in a world changed beyond compare. But what attracted us was not only this classic narrative structure but also how weather could play a vital part. Our time in the farmhouse was hampered daily by the pattern of heavy weather; sometimes it was so heavy it seemed to affect people’s dreams. So we wondered if there was a way we could create a game where real weather data essentially invades the imaginary universe of a game. We also liked the idea of the player being unsure of who they are in the game. Neither male nor female, man nor beast, they must survive in order to find out the truth of who they are.
What we have come up with is a mobile game that takes real-time weather data from a player’s location and amplifies it in a game universe where climate change has drastically changed everything. The game challenges the player to think adaptively, noticing weather changes around them in the real world in order to plan their next move in the game.
Our research has taken us on many journeys into many approaches to how you might build a game like this. We have gathered research with a number of partners, including Lancaster University, Anglia Ruskin University and Candace Howarth, a climate scientist currently working with the Department for the Environment. We have been given insight into stark futures of climate change and have factored some of this thinking into our game world. Topically, this is the same body of research informing recent news reports about the future of the British climate.
We have distilled the research into a game that has two modes: survival and dream. In survival mode, the character has awoken to find themselves on the point of death and must make decisions, in what is essentially a text adventure, to stay alive. Barely able to move, and in their confusion and weakened state, they pass out frequently; at this point the game switches to dream mode, an infinite runner in which they have to dodge enemies and jump across strange spheres. In both modes, aspects of the gameplay – whether narrative features or game parameters – are influenced by the state of the weather outside the player’s window.
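To make the idea concrete, here is a minimal sketch of how a real weather observation might be “amplified” into exaggerated game-world parameters for both modes. The field names, scaling factors and parameters are all invented for illustration – this is not the actual Cold Sun implementation.

```python
# Hypothetical sketch: amplify a real-world weather observation into
# game parameters. All names and numbers are illustrative assumptions.

def amplify_weather(observation, factor=3.0):
    """Map a real weather reading onto exaggerated game-world conditions."""
    rain = observation.get("rain_mm_per_hour", 0.0)
    wind = observation.get("wind_kph", 0.0)
    temp = observation.get("temp_c", 15.0)
    return {
        # Survival mode: heavier real rain means faster flooding and
        # scarcer dry shelter in the text adventure.
        "flood_rate": rain * factor,
        "shelter_scarcity": min(1.0, rain / 20.0),
        # Dream mode: real wind speed drives enemy speed in the runner.
        "enemy_speed": 1.0 + wind / 25.0,
        # Cold snaps shorten how long the character stays conscious.
        "consciousness_seconds": max(30, 120 - int(max(0, 10 - temp)) * 6),
    }

# A cold, wet, windy day outside the player's window...
params = amplify_weather({"rain_mm_per_hour": 8.0, "wind_kph": 40.0, "temp_c": 5.0})
```

The point of the `factor` argument is the “amplification” itself: mild real drizzle becomes a serious in-game flood, so the player has a reason to notice the actual sky.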
We are releasing the game to a test audience in the next few months; alongside rating levels of fun and engagement, they will test whether they feel a heightened awareness of the real weather as a result of playing the game. Bring on the rain!
So 2014 is here. A time to reflect on some of the research projects we have been doing in the margins of 2013.
In June Mudlark received funding from the TSB Feasibility Study fund to look at tracking user travel behaviour with a view to building game logic around it. This was a piece of parallel research designed to complement our previous work on Chromaroma by extending the game beyond the constraints of fixed infrastructure – smart cards and RFID terminals in stations and on buses.
We wanted to explore how a user or player could get an enhanced history of their travel in London and beyond by monitoring their movement and transport habits over time.
We were aware of the success of the Moves app and the effortless way it tracks different modes of travel – walking, cycling, running and generic “transport” – with a near-miraculous level of accuracy. We wanted to go a step further and see if we could create a more granular level of detail than is provided by Moves’ “transport” category. We believed that by developing our own system we could tease out the fine distinctions between train, underground, bus and car.
Working with the School of Electronic Engineering and Computer Science at Queen Mary University, we made this happen by combining sensors and data available on current smartphones – GPS and accelerometer. We developed an Android app based on an algorithm that classifies movement patterns and corrects classification errors by analysing user movement over small segments of time.
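The two-stage idea described above – classify each short segment from raw sensor features, then correct isolated errors by smoothing over neighbouring segments – might be sketched like this. The thresholds and feature names are invented stand-ins, not the algorithm developed with Queen Mary.

```python
from collections import Counter

# Illustrative sketch only: per-segment classification from speed and
# accelerometer variance, followed by majority-vote smoothing.

def classify_segment(avg_speed_kph, accel_variance):
    """Crude per-segment mode guess from two made-up features."""
    if avg_speed_kph < 6:
        return "walk"
    if avg_speed_kph < 25 and accel_variance > 2.0:
        return "cycle"
    if avg_speed_kph < 60:
        return "bus"
    return "train"

def smooth(labels, window=2):
    """Replace each label with the majority label in a small window,
    fixing one-off misclassifications mid-journey."""
    out = []
    for i in range(len(labels)):
        lo, hi = max(0, i - window), min(len(labels), i + window + 1)
        out.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return out

# A walking journey with one GPS glitch that looks like a bus segment:
raw = [(4, 0.5), (5, 0.6), (30, 0.4), (4, 0.5), (5, 0.4)]
labels = smooth([classify_segment(s, v) for s, v in raw])
```

The smoothing pass is what the post calls correcting “errors in classification through analysis of user movement patterns over small segments of time”: a single anomalous segment is outvoted by its neighbours.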
Our feasibility test around East London in December, with five test subjects using bus, train, underground and cycle, correctly identified the user’s shifts in transport mode with an overall accuracy of 78.5%. This was an amazing outcome for something that was working only with the data generated by the phone from the testers’ movements.
We also wanted to augment the sensor algorithm with online look-ups of live transport data. To test this aspect we did some quick development using both the Moves API and the Transport API (a service which provides user routing on the UK’s transport networks). We took my Moves data, generated on my iPhone, and ran queries on the start and end points of journeys against Transport API’s routing service. This produced remarkably accurate predictions of journey type, down to the bus route or train line.
We ran across some issues with it falsely choosing bus instead of train and vice versa. We discovered accuracy was increased by choosing the fastest journey in any list of possible routes between A and B. This will obviously not always be right – a user may choose a slower journey – so a future addition would be a method to weight likely routes by the user’s travel history and how often they have travelled each route before.
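That history-weighted selection could be sketched as a simple scoring function: prefer the fastest route returned by the journey planner, but let a user’s past travel tip the balance. The route shapes, IDs and weights below are made up for illustration, not drawn from the Transport API.

```python
# Hedged sketch of route selection: speed score plus a familiarity boost
# from the user's travel history. All data and weights are hypothetical.

def pick_route(routes, history_counts, history_weight=0.5):
    """Return the best route, balancing journey time against how often
    the user has taken each candidate before."""
    fastest = min(r["duration_min"] for r in routes)

    def score(route):
        speed_score = fastest / route["duration_min"]  # 1.0 for the fastest
        familiarity = history_counts.get(route["id"], 0)
        total_trips = sum(history_counts.values()) or 1
        return speed_score + history_weight * (familiarity / total_trips)

    return max(routes, key=score)

routes = [
    {"id": "bus_25", "mode": "bus", "duration_min": 40},
    {"id": "rail_overground", "mode": "train", "duration_min": 30},
]
# A user who almost always takes the 25 bus outweighs the faster train.
chosen = pick_route(routes, {"bus_25": 18, "rail_overground": 2})
```

With no history at all, the function degenerates to the fastest-journey heuristic we actually tested.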
We came to the conclusion that by combining the app with a query of the Transport API we could reproduce a user’s journey on any transport type in the UK with a high level of accuracy. We hope to explore this more in a future iteration of the app and also to integrate some gameplay into the mix. Watch this space as this develops during 2014.
Earlier in the year, we’d proposed a “perceptive” experience for the CBBC homepage, taking in data feeds to change how it was presented and incorporating playful elements (eg blowing away clouds if it was a rainy day near you), so Perceptive Media was a natural fit.
Breaking Out, BBC R&D’s original Perceptive Media experiment, involved a two-character radioplay – one of whom was acted by a text-to-voice synthesiser – which used data to adapt the content to the user’s context, eg using location data to fill in dialogue gaps such as weather or place names (“you can do anything, you can go to the Imperial War Museum” – if you’re in Salford/Manchester).
One of the main issues BBC R&D faced when they tested the Perceptive Media radioplay is that audiences could not separate the web-based content from the shiny screen – users would wait for something to happen on screen, rather than listening to the audio content.
Ian wanted to free the content from the screen, to see how audiences really reacted to Perceptive Media.
The possibilities of networked things and Perceptive Media are vast, opening up content to be ‘remixed’ live or contextualised for an audience’s environment and context.
Ian’s brief to us was to create a domestic radio-style object that would play back the radioplay and pull in data to adapt it to the user’s context. This would enable audiences to act naturally around the radioplay content, and BBC R&D to understand more about Perceptive Media and audience behaviour.
We decided that the object itself would provide further data feeds – using analogue electronics and sensors – in addition to the web-based data already being collected.
The Domestic Environment
Influenced by networked objects that fit seamlessly into the domestic environment and serve a natural function – such as Voy’s Ugle, Russell’s Bikemap, the Good Night Lamp and Skrekkøgle’s own radio object, Plugg – we wanted to ensure that the object was a part of the home, not an intervention. The Perceptive Radio is a behaviourally-driven design.
To do this, we spent time role-playing, listening to the radio, and documenting how we behaved around it – as well as what else was going on around the home. There were three key scenarios that most frequently played out:
Pottering and moving about in one room (eg making a cup of tea, walking to and from cupboards)
Pottering and moving in and out of many rooms (eg housework, gardening)
Settled down to properly listen to the radio (eg armchair with a cup of tea and pet)
The first two scenarios point at partial attention – with radio as an accompaniment – whilst the third is a clear, focused attention scenario.
The environmental/behavioural elements that were most frequently documented:
Time of day
Spikes in ambient noise (eg a telephone call, washing machine spin cycle)
Designing The Perceptive Radio
The design challenge was to demonstrate how a networked object could deliver tailored media experiences that are sympathetic to domestic environments, without being disruptive or jarring.
Marrying the contexts with environmental behaviours was the next stage in domestic design:
What would a user want a radio to do when they are moving around a room frequently?
What would a user want a radio to do when someone calls them up?
What would a user want a radio to do when they are settling down to relax?
In order to avoid any unnecessary diversions or wrong routes, we settled on four key design rules:
Behaviour of the audience as control input
The Perceptive Radio is to be designed as an object that is sympathetic to the domestic environment – something that sits comfortably and naturally in a home – and the behaviour of the listener affects the output. Passive, not active, inputs.
No physical interactions
Whilst the box itself has many affordances that could be used as sensor inputs – such as orientation, angle, touch – it was key that the object itself would be secondary. A function, rather than a tool. So, beyond obvious controls (on/off, play, master volume), the user’s behaviour and environment would be the only control inputs.
Meaningful controls
As the user is the control input, the controls have to be meaningful. It is important that those controls are behaviours that already exist – we did not attempt to create a new, invisible control system. We focused on ambient sound and natural movement rather than forced gestures.
Believable effects
For the Perceptive Radio to fit comfortably into a domestic environment, the output (effects) from the interactions must be believable. There are many variables that could be mapped to interactions – such as pitch or dialogue pace – that would jar the listening experience rather than being sympathetic to the audience. All effects must be believable and beneficial to the user.
Four Design Scenarios
1. Pottering and moving about in one room: making a cup of tea.
Change in volume.
In this scenario, the user is moving around the kitchen, boiling the kettle, collecting mugs and milk whilst they listen to the radio. As they move away from the radio, the radio raises its output volume so that the user hears a consistent level at all times.
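The compensation in scenario 1 amounts to a simple gain curve: as a distance sensor reports the listener moving away, scale the output up so the perceived level stays roughly constant. The reference distance, ceiling and linear falloff below are assumed tuning values, not the Perceptive Radio’s actual ones.

```python
# Illustrative sketch of distance-compensated volume. The reference
# distance and gain ceiling are invented assumptions.

def compensated_gain(distance_m, reference_m=1.0, max_gain=4.0):
    """Scale output gain with listener distance relative to a reference
    listening position, capped so a far-off reading can't blast the room."""
    if distance_m <= reference_m:
        return 1.0
    return min(max_gain, distance_m / reference_m)
```

The cap matters in practice: a sensor glitch reporting the listener ten metres away should not quadruple-plus the volume of a domestic radio.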
2. Pottering and moving about in one room: large spike in ambient noise.
Altering the depth of the audio playback.
In this scenario, the user is moving around the kitchen and filling up the dishwasher whilst they listen to the radio. As the dishwasher kicks in and there is a large spike in ambient noise, the radio alters the depth of the audio playback to foreground the most important element (eg the actors’ voices).
3. Settled down to properly listen to the radio: lowered light levels.
Adjusting the EQ levels.
In this scenario, the user is settling down to a cosy evening listening to the radio. They stoke their fire and turn off bright lights in the room. The change in light levels in the room causes the radio to adjust the EQ levels, cutting most of the treble, to make a more relaxing listening experience.
4. Settled down to properly listen to the radio: large spike in ambient noise, movement.
Pause the playback.
In this scenario, the user is settled down to a cosy evening listening to the radio before being interrupted by a phone call. The large spike in noise, coupled with the user’s movement away from the radio, causes the radio to pause playback of the audio.
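Taken together, the four scenarios can be sketched as a small set of sensor-to-effect rules. The sensor names, units and thresholds are hypothetical stand-ins for the radio’s analogue inputs, chosen only to show the shape of the decision logic.

```python
# Sketch of the four design scenarios as a rule table. All thresholds
# and sensor names are assumptions for illustration.

def choose_effect(sensors):
    """Return the playback effect implied by the current sensor readings."""
    noisy = sensors["ambient_db"] > 70
    moving = sensors["movement"]
    dim = sensors["light_lux"] < 50
    if noisy and moving:
        return "pause"               # scenario 4: phone call, walking away
    if noisy:
        return "foreground_voices"   # scenario 2: dishwasher spike
    if dim and not moving:
        return "cut_treble"          # scenario 3: settled down, lights low
    if moving:
        return "adjust_volume"       # scenario 1: pottering in the room
    return "none"

# A loud phone rings and the listener gets up to answer it:
effect = choose_effect({"ambient_db": 80, "movement": True, "light_lux": 300})
```

Ordering the rules from most to least disruptive means the strongest signal wins: a noise spike plus movement pauses playback outright, while noise alone merely foregrounds the voices.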
The Technical Approach
As an object, the Perceptive Radio is a bit of a trick on the user – it looks dumb, but is pretty powerful. The broad remit for the technical and object-design side was that it needed to be “small enough to fit in a box, fast enough to use Web Audio API” whilst processing data feeds from analogue electronics.
Working with Adrian McEwen from MCQN LTD in Liverpool to successfully put the internet into a thing, our first hope for running Breaking Out was a smartphone working with a Raspberry Pi and Arduino. Unfortunately, the Perceptive Media radioplay does quite a lot of heavy lifting – including the Web Audio API and real-time object-based processing (putting a load of reverb on the text-to-voice synth) – which requires a good-quality browser such as Chrome and a reasonably powerful computer.
This kind of processing power is sadly not available in small things and – after testing Breaking Out on Android/iOS mobile devices, netbooks, Chromebooks and an array of Raspberry Pis – we decided to build a compact Mini-ITX system (a 7″ PC) to power the Perceptive Radio. Using a BeagleBone board as the analogue electronics hub, we had a slightly larger-than-desired technical solution with enough power to process Breaking Out and the analogue inputs comfortably.
The Radio object itself needed to accommodate a 7″ PC system and sensitive analogue sensors, have capacity to add further sensors/functions later down the line and – crucially – look natural in a home. Early thoughts around the box design included a classic boombox-style affair, a tall Cathedral design and a box.
All had potential, but were never quite right – all either too bulky, odd or anachronistic. We brought Patrick Fenner on board – an “open source engineer” who makes 2D things 3D – to help “make the box”.
Patrick’s experience and speciality in rapid prototyping with lasercut materials took the design in the direction of a more digital-radio look, using finger-jointed edges, birch plywood and an acrylic fascia:
The addition of chrome suitcase corners and a carry handle provides extra structural support and enables Ian to carry the box comfortably on his travels to demonstrate it.
Our Creative Director Matt had an open studio event at his artist studio.
He demonstrated an experimental work of sculptural shapes that inflate and deflate depending on the sound levels in the room. It went down well with the kids who visited the exhibition. We’ve never seen so many children shouting at inanimate objects.