System Persona

Ben Bashford’s writing about ‘Emoticomp’ – the practicalities of working as a designer of objects and systems that have behaviour, and perhaps ‘intelligence’, built into them.

It touches on stuff I’ve talked/written about here and over on the BERG blog – but moves out of speculation and theory to the foothills of the future: being a jobbing designer working on this stuff, and how one might attack such problems.

Excellent.

I really think we should be working on developing new tools for doing this. One idea I’ve had is system/object personas. Interaction designers are used to using personas (research-based user archetypes) to describe the types of people that will use the thing they’re designing – their background, their needs and the like – but I’m not sure we’ve ever really explored the use of personas or character documentation to describe the products themselves. What does the object want? How does it feel about it? If it can sense its location and conditions, how could that affect its behaviour? This kind of thing could be incredibly powerful, and would allow us to develop principles for creating the finer details of the object’s behaviour.
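As a rough illustration of how an object persona might feed into behaviour rules – every field, name and value here is invented, not from any real project – you could imagine capturing it as a small structured document that the object’s behaviour code refers back to:

```python
# A hypothetical "object persona" captured as data, so behaviour rules
# can be traced back to character traits. All fields here are invented.
object_persona = {
    "name": "Hallway Lamp",
    "wants": "to be noticed only when useful",
    "temperament": "calm, a little shy",
    "senses": ["ambient light", "motion", "time of day"],
}

def behaviour(persona, motion_detected, ambient_light):
    """Derive a small behaviour from what the object senses.

    The 'shy' temperament in the persona motivates the gentle response:
    no sudden bright greeting, just a slow fade when it's actually dark.
    """
    if motion_detected and ambient_light < 0.2:
        return "fade up slowly"
    return "stay dim"

print(behaviour(object_persona, True, 0.1))  # fade up slowly
```

The point isn’t the code itself, but that the persona document gives you something to argue from when deciding the finer details of behaviour.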

I’ve used a system persona before, while designing a website for young photographers. We developed it through focus groups with potential users, to establish the personality traits of the people they felt closest to, trusted, and would turn to for guidance. This research helped us establish the facets of a personality statement that influenced the tone of the copy at certain points along the user journeys, and helped the messaging form a coherent whole. It was useful at the time, but I genuinely believe this approach can be adapted and extended further.

I think you could develop a persona for every touchpoint of the connected object’s service. Maybe it could be the same persona if the thing is to feel strong and omnipresent, but maybe you could use different personas for each touchpoint if you’re trying to bring out the connectedness of everything at a slightly more human level. This all sounds a bit like strategy or planning, doesn’t it? A bit like brand principles. We probably need to talk to those guys a bit more too.

“He has to make what he is thinking in order to express it.”

Charles Holland at Fantastic Journal with a brilliant assessment of CE3K and, for that matter – design and material exploration:

“The film is obsessed with issues of representation and non-verbal communication. The famous five-note score that the scientists use to communicate with the aliens, for example, effectively replaces speech. The chief scientist is a Frenchman (played by film director François Truffaut) who makes no more than one or two gnomic utterances and is accompanied throughout the film by an ineffectual translator. The fact that none of the Americans can understand him seems to imbue him with some special understanding of what is going on.

Roy can’t communicate his obsession through conventional language and is forced into non-verbal communication. He has to make what he is thinking in order to express it. And he’s not alone in his obsession. Another character – Gillian Guiler – is also obsessed with Devil’s Tower. She draws it over and over again. In a brilliant scene the two of them converge on Devil’s Tower aware that it’s the location for the alien spaceship’s landing. Trying to work out how to scale the mountain Roy reveals that his knowledge of its topography is vastly superior to Gillian’s. “You should try sculpture next time”, he deadpans.”

PaperCamp prototyped…

Can I have a … ?, originally uploaded by straup.

This Saturday saw the first-ever PaperCamp successfully prototyped.

After a certain amount of last-minute panic, I think I stopped being stressed out about five minutes into Aaron’s talk.

Instead I started to become delighted and fascinated by the strange, wonderful directions people are taking paper and printing – prototyping the lightweight, cheap connection of the digital and the physical.

Jeremy Keith did a wonderful job of liveblogging the event, and there is a growing pool of pictures in the papercamp group on flickr.

Highlights for me included the gusto with which the group set about making things with paper in a frenetic 10-minute session hosted by Alex of Tinker.it, Karsten’s bioinformatic-origami-unicorn proposal, and the delightful work of Sawa Tanaka.

Also, the fact that we’ve made Craft Bioinformatic Origami Unicorns a tag on flickr has to be seen as a ‘win’ in my view.

Lots of people didn’t hear about this one, as I was deliberately trying to keep it a small ‘prototype’. We were also lucky to be operating as a ‘fringe’ event to the Bookcamp event that had been set up by Russell, Jeremy and James, and didn’t want to take the mickey too much (thanks guys) – so apologies to those who didn’t make it.

But, the enthusiastic response means we’ll definitely be doing this again, as a bigger, open, stand-alone event, maybe in the summer, with more space, more attendees and hopefully more heavy-duty printing and papermaking activities.

The next PaperCamp is going to be in NYC in early Feb, and I hear noises that there may be one gestating in San Francisco also…

Stay tuned, paperfans…

Picnic08: Internet Of Things

I missed quite a lot of Picnic, mainly due to getting together with the Dopplr team for a rare physical pow-wow – but I did manage to spend a good chunk of the Friday in the Internet of Things special session.

Speakers included Rafi Haladjian of Violet/Nabaztag fame and David Orban of Widetag/OpenSpime, and there were demos from Tikitag and Pachube (Usman Haque’s excellent new venture).

Sat in the audience was the God-Emperor of Spime, Bruce Sterling, which lent it an extra something. I managed to snag a Tikitag starter kit, which I hope to have a play with this week – I’ll post some unboxing pics when I have a chance.

It was one of those sessions where the palpable sense of scenius is the thing, rather than the content as such (although there was a lot of good stuff in there too) – I came away with renewed enthusiasm for ‘practical ubicomp’ and all things spime-y.

I wasn’t sure whether the talks were being videoed, so I recorded two of the speakers on my N95 – which means the audio quality isn’t particularly great.

So, with that disclaimer, here are the presentations by Matt Cottam of Tellart and Mike Kuniavsky of ThingM.

Jessica Helfand and Tony Stark vs. Don Norman, Paul Dourish and Joss Whedon

Jessica Helfand of Design Observer on Iron Man’s user-interfaces as part of the dramatis personae:

“…in Iron Man, vision becomes reality through the subtlest of physical gestures: interfaces swirl and lights flash, keyboards are projected into the air, and two-dimensional ideas are instantaneously rendered as three and even four-dimensional realities. Such brilliant optical trickery is made all the more fantastic because it all moves so quickly and effortlessly across the screen. As robotic renderings gravitate from points of light in space into a tangible, physical presence, the overall effect merges screen-based, visual language with a deftly woven kind of theatrical illusion.”

Made me think back to a post I wrote here about three years ago, on “invisible computing” in Joss Whedon’s “Firefly”.

Firefly touch table

“…one notices that the UI doesn’t get in the way of the action, the flow of the interactions between the bad guy and the captain. Also, there is a general improvement in the quality of the space, it seems – when there are no obtrusive vertical screens in line-of-sight to sap the attention of those within it.”

Firefly touch table

Instead of the Iron Man/Minority Report approach of making the gestural UI the star of the sequence, this is more interesting – a natural computing interface supports the storytelling – perhaps reminding the audience where the action is…

As Jessica points out in her post, it took some years for email to move from 3D-rendered winged envelopes to things that audiences had common experience and expectations of.

Three years on from Firefly, most of the audience watching sci-fi and action movies or TV will have seen or experienced a Nintendo Wii or an iPhone, and so some of the work of moving technology from star to set-dressing is done – no more outlandish or exotic a setting for exposition than a whiteboard or map-table.

Having said that – tangible UIs are still in transition to the mainstream.

A fleeting shot from the new Bond trailer seems to indicate there’s still work for the conceptual UI artist, but surely this is now the sort of thing that really has a procurement number on the MI6 office-supplies intranet…

Bond Touch table

And – it’s good to see Whedon still pursuing tangible, gestural interfaces in his work…

Anaesthesia and embodied interaction

Went to the dentist last Friday.

I fully comply with the rest-of-the-world’s view of the British relationship with the dental arts, and am completely terrified of going to the little room with the cup of pink rinse.

I asked for recommendations from friends for a dentist who specialised in making people who hadn’t been to the dentist in… a long time… feel more relaxed and happy about the experience.

Mr. Webb told me about his dentist, Dr. Bashar Al-Naher, who uses a combination of mild anaesthetic and NLP to induce relaxation and a feeling of security in his patients.

I’ve been lucky enough never to have had surgery, or to have been in another situation where anaesthesia was employed, so this was a novel experience for me.

Once I’d reached the state of both local and mild general anaesthesia, I had a curious feeling of distance from my body.

I felt as if my conscious mind (in which I seemed together enough to start dissecting the experience) was ‘up on a balcony’ somewhere in my head. I had a distinct feeling that I had retreated to an observation gallery, compartmentalised from my body itself, and even the lower part of my head/face where the action was.

Whilst feeling removed from ‘where the action was’, I started reflecting on ‘Where the action is’, and my previous work with Chris on embodied interaction. I even started thinking about writing this post.

Sometime during this, a small daemon system running somewhere sidled into the balcony where “I” was, and started fretting about all the dissection of the experience I was doing – perhaps fearing the degree of conscious thought going on would let the body (and the pain) in through the back door.

I went back to (un)concentrating on my breathing, and the visualisation that Dr. Al-Naher was leading me through. Happy again, I let the drilling and filling continue…

After the work had been done and I was coming out of the state of anaesthesia I was talking with the dentist, probably quite slowly and deliberately – but definitely ‘back in the room’.

There was a moment where I was aware that my foot was in an uncomfortable or precarious position. Most of the time we wouldn’t give this a microsecond’s conscious thought, and we would just effortlessly readjust the position of our foot.

I felt I had to send a discrete set of instructions down my body to my foot – almost like Flesh-Logo – in order to move it.

Of course there are all sorts of flaws with this interpretation, but the temporary compartmentalising of ‘body’ and ‘mind’ that I felt just reinforced the fact that most of the time there is no separation at all.

The experience (apart from making my teeth better) has left me with real conviction about the train of thought in Paul Dourish’s book – about the power of embodied interaction to improve our interfaces with technology.

And also, of course, how good my new dentist is – but he probably mindhacked me to say that…

Advice for bloggers, podcasters – and conference speakers


As seen on the Heathrow Express this morning.

I’m in Lübeck, Germany for the intriguingly-titled “Cognitive Design” conference, where Doors of Perception’s John Thackara will be giving the keynote tomorrow morning.

I’m going to be giving a Post-Nintendo-Revolution remix of the talk Chris and I gave at Etech this year, with plenty of play in there.

In moving back to the UK, I forgot to bring any of the Nokia NFC phones with me. So unfortunately I won’t be able to demo them which is a pity – I’ll have to wave my arms about twice as much as usual.

There’s also a keynote by August de los Reyes of MSN / Microsoft on “The future of the PC” – which should be interesting given the very recent reorganisation of MSFT.

No screens = “Serenity”

One upside of being down for the count over a long weekend is that there’s no guilt in eating an entire boxed set of TV all at once.

I sat down (well, lay down) to take in Joss Whedon’s aborted cowboy space-opera, Firefly, and was pleasantly surprised.

It’s no wonder it was cancelled – it takes ages to get going, it’s got a huge cast, each of whom “has a secret”, and some of the best lines are in Mandarin, it seems.

One thing that did strike me about a couple of episodes was how very few ‘screens’ feature in Firefly’s vision of the future – and in general how tangible and situated digital technology seems in that universe.

The Claaaw!!! I mean… THE CLAAMMM!!!!

Andrew Losowsky has posted the full, unedited version of an article he wrote for The Guardian last year on Eyetoy and embodied interaction, including comment from Ludology.org’s estimable Gonzalo Frasca and an interview with Richard Marks – who pioneered the tech behind Eyetoy for Sony – and what he’s doing next: The Clam…

Some excerpts:

“EyeToy relies on the most basic interface ever invented – the human body. Graphics may get photo-realistic, but there’s nothing real about bashing X to run faster, or clicking the mouse to jump. If your character needs to run faster, run faster. If it needs to jump, jump. The interface gap is suddenly made all but irrelevant. Look at the screen. You see you? That’s you, that is.”

and… The Clam!

“The Clam is a single U-shaped squeezable piece of fabric you put in your hand, and when you squeeze it, it changes in aspect ratio very fast. So you can use it as a mouse cursor. You squeeze to click and drag, and then let go to release. Because you can monitor the direction of the aspect ratio, you can also use it to rotate objects.”

“His lab has developed a simple photo storage/manipulation program to use The Clam with – and it works so simply, it seems almost too obvious.”
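The squeeze-to-click mapping Marks describes could be sketched roughly like this – this is not Sony’s implementation, just an illustration, and the threshold value and function names are invented:

```python
# Rough sketch of the Clam idea: the camera measures the fabric's
# aspect ratio; squeezed (narrow) means the "button" is held down.
SQUEEZE_THRESHOLD = 0.6  # invented cutoff: ratios below this count as squeezed

def clam_button_state(aspect_ratio):
    """Return 'down' while the clam is squeezed, 'up' when released."""
    return "down" if aspect_ratio < SQUEEZE_THRESHOLD else "up"

# Tracking the ratio over successive frames gives click-and-drag:
# squeeze to press, move while holding, relax to release.
samples = [0.9, 0.5, 0.4, 0.8]
states = [clam_button_state(r) for r in samples]
print(states)  # ['up', 'down', 'down', 'up']
```

Monitoring how the ratio changes over time, rather than just thresholding it, is what would also let you recover rotation, as the quote suggests.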

And finally:

“Touch is one of Sony’s four Interface Research Areas (the others being Inertial, Video and Audio – the EyeToy has an in-built microphone, by the way). Tilt-based gaming, through handheld games such as Wario Ware, is also becoming successful. And there’s potentially much more within our grasp.

“If you look at mobile phones now,” says Ron Festajo, “practically every one has a camera. You can take photos and use it as an input device. It’s very exciting.”

“People understand cameras. And cameras open up all kinds of possibilities. The revolution is already upon us, comrades.”

A great article, and a good intro to the already-happening-ness fun of tangible computing.