Chrome Dreams II

Season 4 of “For All Mankind” just started on Apple TV.

I imagine the Venn diagram of folks who still read this and folks who watch it overlaps almost entirely, but for those who don’t: it’s an alt-history of the 20th/21st-century space race, set on a divergent world-line where the USSR got to the Moon first. As with most things prestige-TV, that becomes the setting for a soap opera, but hey.

But – the above moment, where one of the main characters communicates from Mars to their daughter, was notable for the Proustian rush it gave.

A perfect period piece of UI in production design – the brushed chrome and aqua buttons of the early aughts.

To have that kind of historical reference point from a field so profoundly (proudly?) ahistorical as design for technology is a shock.

Partner / Tool / Canvas: UI for AI Image Generators

“Howl’s Moving Castle, with Solar Panels” – using Stable Diffusion / DreamStudio Lite

Like a lot of folks, I’ve been messing about with the various AI image generators as they open up.

While at Google I got to play with language model work quite a bit, and we worked on a series of projects looking at AI tools as ‘thought partners’ – but mainly in the space of language with some multimodal components.

As a result perhaps – the things I find myself curious about are not so much the models or the outputs – but the interfaces to these generator systems and the way they might inspire different creative processes.

For instance – Midjourney operates through a Discord chat interface, reinforcing perhaps the notion that there is a personage at the other end crafting these things and sending them back to you in a chat. I found the turn-taking dynamic underlines play and iteration – creating an initially addictive experience despite the clunkiness of the UI. It feels like an infinite game. You’re also exposed (whether you like it or not…) to what others are producing – and the prompts they are using to do so.

DALL-E and Stable Diffusion via DreamStudio have more of a ‘traditional’ tool UI, with a canvas where the prompt is rendered, which the user can tweak with various settings and sliders. It feels (to me) less open-ended – but more tunable, more open to ‘mastery’ as a useful tool.
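For the curious, here’s a rough sketch of the kind of knobs those sliders map onto, using the open-source diffusers library rather than DreamStudio itself – the model name and parameter values here are purely illustrative, not a recipe.

```python
# A sketch of the 'tunable' knobs a DreamStudio-style canvas exposes,
# via the open-source diffusers library (model and values are illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "Howl's Moving Castle, with solar panels",
    num_inference_steps=30,  # the 'steps' slider: more steps, more refinement
    guidance_scale=7.5,      # the 'cfg scale' slider: how literally to follow the prompt
    generator=torch.Generator("cuda").manual_seed(42),  # fix the seed to tweak repeatably
).images[0]

image.save("howls-moving-castle-solar.png")
```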

All three to varying extents resurface prompts and output from fellow users – creating a ‘view-source’ loop for newbies and dilettantes like me.

Gerard Serra – who we were lucky to host as an intern while I was at Google AIUX – has been working on perhaps another possibility for ‘co-working with AI’.

While this is back in the realm of LLMs and language rather than image generation, I am a fan of the approach: creating a shared canvas that humans and AI co-work on. How might this extend to image generator UI?

Conversation with Google ATAP / Shaping Things 2.0 vs Web3.0

I was interviewed recently by Anastasiia Mozghova for Google ATAP’s Twitter feed, where it’s being published as a thread in two parts (part one here).

It centred around ambient computing and work I’ve been involved in around that subject in the past – particularly looking at notions of ‘social grace’ in computing.

This was a concept we discussed a lot in relation to Soli, the radar sensor that Google ATAP invented. I was involved in that project a little at Creative Lab, then more peripherally still (as kind of a cheerleader if anything) at Google Research.

That concept of social grace, for me, begins in the writing of Mark Weiser and continues, amplified, in Adam Greenfield‘s “Everyware” – which still has some very relevant nuggets in it, IMHO, for a book on technology written in 2006 (16 years ago at the time of writing!).

I also mention another classic – which is fast coming up on its 20th anniversary (next year?) – Bruce Sterling’s Shaping Things.

Its eco-centric vision of ‘spimes‘ rhymes (sorry) with the current hype around web3, tokenisation, etc. – but while that hype is obsessed with financialisation, Shaping Things instead hints at using data and sousveillance as means toward ends of resource responsibility, circularity and accountability in design and manufacture.

Time for a second edition?

The beginning of BotWorld



Sad & Hairy, originally uploaded by moleitau.

A while back – two years ago in fact, just under a year before we (BERG) announced Little Printer – Matt Webb gave an excellent talk at the Royal Institution in London called “BotWorld: Designing for the new world of domestic A.I.”

This week, all of the Little Printers that are out in the world started to change.

Their hair started to grow (you can trim it if you like) and they started to get a little sad if they weren’t used as often as they’d like…

IMG_6551

The world of domesticated, tiny AIs that Matt was talking about two years ago is what BERG is starting to explore, manufacture – and sell in ever larger numbers.

I poked at it as well in my talk “Gardens & Zoos” about a year ago, building on Matt Webb’s thinking – suggesting that Little Printer was akin to a pot-plant in its behaviour, volition and place in our homes.

Little Valentines Printer

Now it is in people’s homes, workplaces and schools, it’s fascinating to see how people’s relationships with it play out every day.

20 December, 19.23

I’m amazed and proud of the team for a brilliant bit of thinking-through-making-at-scale, which, though it just does very simple things right now, is our platform for playing with the particular corner of the near-future that Matt outlined in his talk.

Jessica Helfand and Tony Stark vs. Don Norman, Paul Dourish and Joss Whedon

Jessica Helfand of Design Observer on Iron Man’s user-interfaces as part of the dramatis personae:

“…in Iron Man, vision becomes reality through the subtlest of physical gestures: interfaces swirl and lights flash, keyboards are projected into the air, and two-dimensional ideas are instantaneously rendered as three and even four-dimensional realities. Such brilliant optical trickery is made all the more fantastic because it all moves so quickly and effortlessly across the screen. As robotic renderings gravitate from points of light in space into a tangible, physical presence, the overall effect merges screen-based, visual language with a deftly woven kind of theatrical illusion.”

Made me think back to a post I wrote here about three years ago, on “invisible computing” in Joss Whedon’s “Firefly”.

Firefly touch table

“…one notices that the UI doesn’t get in the way of the action, the flow of the interactions between the bad guy and the captain. Also, there is a general improvement in the quality of the space, it seems – when there are no obtrusive vertical screens in line-of-sight to sap the attention of those within it.”

Firefly touch table

Instead of the Iron Man/Minority Report approach of making the gestural UI the star of the sequence, this is more interesting – a natural computing interface supports the storytelling – perhaps reminding the audience where the action is…

As Jessica points out in her post, it took some years for email to move from 3D-rendered winged envelopes to things that audiences had common experience and expectations of.

Three years on from Firefly, most of the audience watching scifi and action movies or tv will have seen or experienced a Nintendo Wii or an iPhone, and so some of the work of moving technology from star to set-dressing is done – no more outlandish or exotic as a setting for exposition than a whiteboard or map-table.

Having said that – we’re still in the midst of tangible UIs’ transition to the mainstream.

A fleeting shot from the new Bond trailer seems to indicate there’s still work for the conceptual UI artist, but surely this now is the sort of thing that really has a procurement number in the MI6 office supplies intranet…

Bond Touch table

And – it’s good to see Whedon still pursuing tangible, gestural interfaces in his work…

T5: Expectations of agency


I’m in Oslo for a few days, and to get there I went through Heathrow’s new and controversial Terminal Five. After all the stories, and Ryan’s talk on the service design snafus it’s experienced, I approached my visit there with excitement and trepidation.

Excitement, because it’s still a major piece of architecture by Richard Rogers and Partners – and sparkly new airports are, well, sparkly and new.

YMMV, especially as we were travelling off-peak, but it was pretty calm and smooth sailing all the way. I’m guessing they’ve pulled out all the stops in order to get things on an even keel.

I saw both pieces installed in the BA Club Lounges by Troika (‘Cloud’ and ‘All the Time in the World’), both of which were lovely – you can get to see them both without having to be a fancypants gold carder, which is good.

The thing that struck me, though, was the degree of technological automation of previously human-mediated processes that was anticipated, designed and built – and that then had to be retrofitted with human intervention and signage.

It’s a John Thackara rant waiting to happen, and that’s aside from all the environmental impacts he might comment on!

My favourite was the above sign, added to the lifts that stop and start automatically, to make sure you understand that you can’t press anything. Of course, we’re trained to expect agency, or at least the simulation of agency, in lifts – keeping doors open, selecting floors, pressing our floor button impatiently and tutting to make the lift go faster. Remember that piece in James Gleick’s FSTR where lift engineers deliberately design placebo button presses to keep us impatient humans happy? People still kept pressing the panels – me included!

To paraphrase Naoto Fukasawa: sometimes design dissolves in behaviour, and then quickly sublimates into hastily-printed and laminated signage…

Glanceable / Pored-Over

I’ve been catching up on the internets after a long roadtrip over Christmas.

Two images from the ever-excellent infosthetics.com made me think that the best interaction and information design is stuff that can be glanced-at:

To be glanced-at

or pored-over:

To be pored-over

but unfortunately, most commercial interaction design falls between these two stools, in the ‘don’t make me think’ category.

I’d like to create services that scamper between beautiful extremes in 2008…

Lost futures: Unconscious gestures?

Lamenting lost futures is not that productive, but that doesn’t stop me enjoying it. Whether it’s the pleasure of reading Ellis’s “Ministry of Space” and thinking “what if?”, or looking through pop-culture futures past as in this Guardian article – it’s generally a sentimental, but thought-provoking, activity.

Recently, though, I’ve been thinking about a temporarily lost future that’s closer to home in the realm of mobile UI design. That’s the future that’s been perhaps temporarily lost in the wake of the iPhone’s arrival.

A couple of caveats.

Up until June this year, I worked at Nokia in a team that created prototype UIs for the Nseries devices, so this could be interpreted as sour grapes, I suppose… but I own an iPod Touch, which uses more or less the same UI/OS, and love it.

I spoke at SkillSwap Bristol in September (thanks to Laura for the invite), and up until the day I was travelling to Bristol I didn’t know what I was going to say – but I’d been banging on at people in the pub (esp. Mr. Coates) about the iPhone’s possible impact on interface culture, so I thought I’d put together some of those half-formed thoughts for the evening’s debate.

The slides are on Slideshare (no notes, yet) but the basic riff was that the iPhone is a beautiful, seductive but jealous mistress that craves your attention, and enslaves you to its jaw-dropping gorgeousness at the expense of the world around you.

skillswap250907

This, of course, is not entirely true – but it makes for a good starting point for an argument! Of course, nearly all our mobile electronic gewgaws serve in some small way or other to take us away from the here and now.

But the flowing experience just beyond Jony Ive’s proscenium chrome does have a hold more powerful than perhaps we’ve seen before – not only over users, but over those deciding product roadmaps. We’re going to see a lot of attempts to vault the bar that Apple have undoubtedly raised.

Which, personally, I think is kind-of-a-shame.

First – a (slightly-bitter) side-note on the Touch UI peanut gallery.

In recent months we’ve seen Nokia and Sony Ericsson show demos of their touch UIs. To which the response on many tech blogs has been “It’s a copy of the iPhone”. In fact, even a Nokia executive responded that they had ‘copied with pride’.

That last remark made me spit with anger – and I almost posted something very intemperate as a result. The work that all the teams within Nokia had put into developing touch UI got discounted, just like that, with a half-thought-through response in a press conference. I wish that huge software engineering outfits like S60 could move fast enough to ‘copy with pride’.

Sheesh.

The fact of the matter is, if you have roughly the same component pipeline, and you’re designing an interface used on the go by (human) fingers, you’re going to end up with a lot of the same UI principles.

But Apple executed first, and beautifully, and they win. They own it, culturally.

Thus ends the (slightly-bitter) side-note – back to the lost future.

Back in 2005, Chris and I gave a talk at O’Reilly ETech based on the work we were doing on RFID and tangible, embodied interactions with Janne Jalkanen – heavily influenced by the thinking of Paul Dourish in his book “Where the Action Is”, where he advances his argument for ’embodied interaction’:

“By embodiment, I don’t mean simply physical reality, but rather, the way that physical and social phenomena unfold in real time and real space as a part of the world in which we are situated, right alongside and around us.”

I was strongly convinced that this was a direction that could take us down a new path – away from recreating desktop computer UIs on smaller and smaller surfaces – and towards an alternative future for mobile interaction design that would be more about ‘being in the world’ than being in the screen.

That seems very far away from here – and although development in sensors and other enablers continues, and efforts such as the interactive gestures wiki are inspiring – it’s likely that we’re locked into pursuing very conscious, very gorgeous, deliberate touch interfaces – touch-as-manipulate-objects-on-screen rather than touch-as-manipulate-objects-in-the-world for now.

But, to close, back to Nokia’s S60 touch plans.

Tom spotted it first. In their (fairly-cheesy) video demo, there’s a flash of something wonderful.

Away from the standard finger and stylus touch stuff there’s a moment where a girl is talking to a guy – and doesn’t break eye contact, doesn’t lose the thread of conversation; just flips her phone over to silence and reject a call. Without a thought.
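(The mechanics needn’t be exotic, either. Here’s a rough sketch of how a ‘flip to silence’ might be detected from an accelerometer stream – the accelerometer feed and silence_ringer() call are stand-ins I’ve invented to illustrate the idea, not any real phone API.)

```python
# Hypothetical sketch: silence the ringer when the phone is turned face-down.
# `accelerometer` yields (x, y, z) readings in m/s^2; `silence_ringer` is a stand-in.
FACE_DOWN_Z = -7.0   # z-axis threshold: gravity pointing 'out' of the screen
HOLD_SAMPLES = 10    # must stay face-down for a few consecutive readings

def watch_for_flip(accelerometer, silence_ringer):
    face_down_count = 0
    for x, y, z in accelerometer:
        if z < FACE_DOWN_Z:          # screen facing the table
            face_down_count += 1
            if face_down_count >= HOLD_SAMPLES:
                silence_ringer()     # the unconscious gesture, made explicit
                return
        else:
            face_down_count = 0      # picked back up, or never really flipped
```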

Being in the world: s60 edition from blackbeltjones on Vimeo.

As Dourish would have it:

“interacting in the world, participating in it and acting through it, in the absorbed and unreflective manner of normal experience.”

I hope there’s a future in that.