“Magic notebooks, not magic girlfriends”

The take-o-sphere is awash with responses to last week’s WWDC, and the announcement of “Apple Intelligence”.

My old friend and colleague Matt Webb’s is one of my favourites, needless to say – and I’m keen to try it, naturally.

I could bang on about it of course, but I won’t – because I guess I have already.

Of course, the concept is the easy bit.

Having a trillion-dollar corporation actually then make it, especially when it’s counter to their existing business model, is another thing.

I’ll just leave this here from about 6 years ago…

BUT!

What I do want to talk about is the iPad calculator announcement that preceded the broader AI news.

As a fan of Bret Victor, this made me very happy.

As a fan of Seymour Papert, it made me very happy.

As a fan of Alan Kay and the original vision of the Dynabook, it made me very happy.

But moreover – as someone who has never been that excited by the chatbot/voice obsessions of BigTech, it was wonderful to see.

Of course the proof of this pudding will be in the using, but the notion of a real-time magic notebook where the medium is an intelligent canvas responding as an ‘intelligence amplifier’ is much more exciting to me than most of the currently hyped visions of generative AI.

I was particularly intrigued to see the more diagrammatic example below, which seemed to belong in the conceptual space between Bret Victor’s Dynamicland and Papert’s Mathland.

I recall that when I read Papert’s “Mindstorms” (back in 2012, it seems?) I got retroactively angry about how I had been taught mathematics.

The ideas he advances for learning maths through play, embodiment and experimentation made me sad that I had not had the chance to experience the subject through those lenses, but instead through rote learning leading to my rejection of it until much later in life.

As he says, “The kind of mathematics foisted on children in schools is not meaningful, fun, or even very useful.”

Perhaps most famously he writes:

“a computer can be generalized to a view of learning mathematics in “Mathland”; that is to say, in a context which is to learning mathematics what living in France is to learning French.”

Play, embodiment, experimentation – supported by AI – not *done* for you by AI.
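Papert’s Mathland took concrete form in Logo’s turtle geometry – shapes as things a child walks out, not formulas to memorise. As a rough sketch of that spirit (mine, not Papert’s – using Python’s standard turtle module rather than Logo, and a made-up walk_shape helper):

```python
import turtle

def walk_shape(sides, step=100):
    """'Walk out' a regular shape, Logo-style: go forward one step,
    turn through the exterior angle, repeat until you are back home."""
    pen = turtle.Turtle()
    for _ in range(sides):
        pen.forward(step)
        pen.left(360 / sides)  # the turns always add up to one full turn

# A square is four steps and four right-angle turns;
# a circle is just a polygon made of very many tiny steps.
walk_shape(4)
walk_shape(60, step=10)
turtle.done()
```

The geometry gets discovered by doing – which is the stance I’m hoping the ‘magic notebook’ inherits.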

I mean, I’m clearly biased.

I’ve long thought the assistant model should be considered harmful. Perhaps the Apple approach announced at WWDC means it might not be the only game in town for much longer.

Back at Google I was pursuing concepts of Personal AI with something called Project Lyra, which perhaps one day I can go into a bit more deeply.

Anyway.

Early on Jess Holbrook turned me onto the work of Professor Andy Clark, and I thought I’d try and get to work with him on this.

My first email to him had the subject line of this blog post: “Magic notebooks, not magic girlfriends” – which I think must have intrigued him enough to respond.

This, in turn, led to the fantastic experience of meeting up with him a few times while he was based in Edinburgh and having him write a series of brilliant pieces (for internal consumption only, sadly) on what truly personal AI might mean through his lens of cognitive science and philosophy.

As a tease here’s an appropriate snippet from one of Professor Clark’s essays:

“The idea here (the practical core of many somewhat exotic debates over the ‘extended mind’) is that considered as thinking systems, we humans already are, and will increasingly become, swirling nested ecologies whose boundaries are somewhat fuzzy and shifting. That’s arguably the human condition as it has been for much of our recent history—at least since the emergence of speech and the collaborative construction of complex external symbolic environments involving text and graphics. But emerging technologies—especially personal AI’s—open up new, potentially ever-more-intimate, ways of being cognitively extended.”

I think that’s what I object to, or at least recoil from, in the ‘assistant’ model – we’re abandoning the exploration of loads of really rich, playful ways in which we already think with technology.

Drawing, model making, acting things out in embodied ways.

Back to Papert’s Mindstorms:

“My interest is in the process of invention of “objects-to-think-with,” objects in which there is an intersection of cultural presence, embedded knowledge, and the possibility for personal identification.”

“…I am interested in stimulating a major change in how things can be. The bottom line for such changes is political. What is happening now is an empirical question. What can happen is a technical question. But what will happen is a political question, depending on social choices.”

The somewhat lost futures of Kay, Victor and Papert are now technically realisable.

“what will happen is a political question, depending on social choices.”

The business model is the grid, again.

That is, Apple are toolmakers, at heart – and personal device sellers at the bottom line. They don’t need to maximise attention or capture you as a rent (mostly). That makes personal AI as a ‘thing’ that can be sold much more of a viable choice for them, of course.

Apple are far freer, well-placed (and of course well-resourced) to make “objects-to-think-with, objects in which there is an intersection of cultural presence, embedded knowledge, and the possibility for personal identification.”

The wider strategy of “Apple Intelligence” appears to be just that.

But – my hope is that the ‘magic notebook’ stance in the new iPad calculator represents the start of an exploration of a wider, richer set of choices in how we interact with AI systems.

Let’s see.

Chrome Dreams II

Season 4 of “For All Mankind” just started on Apple TV.

I imagine the Venn set of folks who still read this and watch it is largely overlapping, but for those who don’t, it’s an alt-history of the 20thC/21stC space race on a divergent world-line where the USSR got to the moon first. As with most things prestige TV, that becomes a setting for a soap opera, but hey.

But – the above moment, where one of the main characters communicates from Mars to their daughter, was notable for the Proustian rush it gave.

A perfect period piece of UI in production design – the brushed chrome and aqua buttons of the early aughts.

To have that kind of historical reference point from a field so profoundly (proudly?) ahistorical as design for technology is a shock.

Partner / Tool / Canvas: UI for AI Image Generators

“Howl’s Moving Castle, with Solar Panels” – using Stable Diffusion / DreamStudio Lite

Like a lot of folks, I’ve been messing about with the various AI image generators as they open up.

While at Google I got to play with language model work quite a bit, and we worked on a series of projects looking at AI tools as ‘thought partners’ – but mainly in the space of language with some multimodal components.

As a result perhaps – the things I find myself curious about are not so much the models or the outputs – but the interfaces to these generator systems and the way they might inspire different creative processes.

For instance – Midjourney operates through a Discord chat interface – reinforcing perhaps the notion that there is a personage at the other end crafting these things and sending them back to you in a chat. I found the turn-taking dynamic underlines play and iteration – creating an initially addictive experience despite the clunkiness of the UI. It feels like an infinite game. You’re also exposed (whether you like it or not…) to what others are producing – and the prompts they are using to do so.

DALL-E and Stable Diffusion via DreamStudio have more of a ‘traditional’ tool UI, with a canvas where the prompt is rendered, which the user can tweak with various settings and sliders. It feels (to me) less open-ended – but more tunable, more open to ‘mastery’ as a useful tool.

All three to varying extents resurface prompts and output from fellow users – creating a ‘view-source’ loop for newbies and dilettantes like me.

Gerard Serra – who we were lucky to host as an intern while I was at Google AIUX – has been working on perhaps another possibility for ‘co-working with AI’.

While this is back in the realm of LLMs and language rather than image generation, I am a fan of the approach: creating a shared canvas that humans and AI co-work on. How might this extend to image generator UI?

Conversation with Google ATAP / Shaping Things 2.0 vs Web3.0

I was interviewed recently by Anastasiia Mozghova for Google ATAP’s Twitter feed, where it’s being published as a thread in two parts (part one here).

It centred around ambient computing and work I’ve been involved in around that subject in the past – particularly looking at notions of ‘social grace’ in computing.

This was a concept we discussed a lot in relation to Soli, the radar sensor that Google ATAP invented. I was involved in that project a little at Creative Lab, then more peripherally still (as kind of a cheerleader if anything) at Google Research.

That concept of social grace, for me, begins in the writing of Mark Weiser and continues, amplified, in Adam Greenfield’s “Everyware” – which still has some very relevant nuggets in it, IMHO, for a book on technology written in 2006 (16 years ago at time of writing!)

I also mention another classic – which is fast coming up on its 20th anniversary (next year?) – Bruce Sterling’s Shaping Things.

Its eco-centric vision of ‘spimes’ rhymes (sorry) with the current hype around web3, tokenisation, etc etc – but while that is obsessed with financialisation, Shaping Things instead hints at using data and sousveillance as means toward ends of resource responsibility, circularity and accountability in design and manufacture.

Time for a second edition?

The beginning of BotWorld



Sad & Hairy, originally uploaded by moleitau.

A while back – two years ago in fact, just under a year before we (BERG) announced Little Printer – Matt Webb gave an excellent talk at the Royal Institution in London called “BotWorld: Designing for the new world of domestic A.I.”

This week, all of the Little Printers that are out in the world started to change.

Their hair started to grow (you can trim it if you like) and they started to get a little sad if they weren’t used as often as they’d like…


The world of domesticated, tiny AIs that Matt was talking about two years ago is what BERG is starting to explore, manufacture – and sell in ever larger numbers.

I poked at it as well in my talk “Gardens & Zoos” about a year ago, building on Matt Webb’s thinking – suggesting that Little Printer was akin to a pot-plant in its behaviour, volition and place in our homes.

Little Valentines Printer

Now that it is in people’s homes, workplaces and schools, it’s fascinating to see how people’s relationships with it play out every day.


I’m amazed and proud of the team for a brilliant bit of thinking-through-making-at-scale, which, though it just does very simple things right now, is our platform for playing with the particular corner of the near-future that Matt outlined in his talk.

Jessica Helfand and Tony Stark vs. Don Norman, Paul Dourish and Joss Whedon

Jessica Helfand of Design Observer on Iron Man’s user-interfaces as part of the dramatis personae:

“…in Iron Man, vision becomes reality through the subtlest of physical gestures: interfaces swirl and lights flash, keyboards are projected into the air, and two-dimensional ideas are instantaneously rendered as three and even four-dimensional realities. Such brilliant optical trickery is made all the more fantastic because it all moves so quickly and effortlessly across the screen. As robotic renderings gravitate from points of light in space into a tangible, physical presence, the overall effect merges screen-based, visual language with a deftly woven kind of theatrical illusion.”

Made me think back to a post I wrote here about three years ago, on “invisible computing” in Joss Whedon’s “Firefly”.

Firefly touch table

“…one notices that the UI doesn’t get in the way of the action, the flow of the interactions between the bad guy and the captain. Also, there is a general improvement in the quality of the space, it seems – when there are no obtrusive vertical screens in line-of-sight to sap the attention of those within it.”

Firefly touch table

Instead of the Iron Man/Minority Report approach of making the gestural UI the star of the sequence, this is more interesting – a natural computing interface supports the storytelling – perhaps reminding the audience where the action is…

As Jessica points out in her post, it took us some years for email to move from 3D-rendered winged envelopes, to things that audiences had common experience and expectations of.

Three years on from Firefly, most of the audience watching scifi and action movies or tv will have seen or experienced a Nintendo Wii or an iPhone, and so some of the work of moving technology from star to set-dressing is done – no more outlandish or exotic as a setting for exposition than a whiteboard or map-table.

Having said that – we’re still in the middle of tangible UIs’ transition to the mainstream.

A fleeting shot from the new Bond trailer seems to indicate there’s still work for the conceptual UI artist, but surely this now is the sort of thing that really has a procurement number in the MI6 office supplies intranet…

Bond Touch table

And – it’s good to see Whedon still pursuing tangible, gestural interfaces in his work…

T5: Expectations of agency


I’m in Oslo for a few days, and to get there I went through Heathrow’s new and controversial Terminal Five. After all the stories, and Ryan’s talk on the service design snafus it’s experienced, I approached my visit there with excitement and trepidation.

Excitement still, because it’s still a major piece of architecture by Richard Rogers and Partners – and sparkly new airports are, well, sparkly and new.

YMMV, especially as we were travelling off-peak, but – it was pretty calm and smooth sailing all the way. I’m guessing they’ve pulled out all the stops in order to get things on an even keel.

Saw both pieces installed in the BA Club Lounges by Troika (‘Cloud’ and ‘All the Time in the World’), both of which were lovely – you can get to see them both without having to be a fancypants gold carder, which is good.

The thing that struck me though was the degree of technological automation of previously human-mediated processes that were anticipated, designed and built – and that then had to be retrofitted with human intervention and signage.
It’s a John Thackara rant waiting to happen, and that’s aside from all the environmental impacts he might comment on!

My favourite was the above sign added to the lifts that stop and start automatically, to make sure you understand that you can’t press anything. Of course, we’re trained to expect agency or at least the simulation of agency in lifts – keeping doors open, selecting floors, pressing our floor button impatiently and tutting to make the lift go faster. Remember that piece in James Gleick’s FSTR where lift engineers deliberately design placebo button presses to keep us impatient humans happy? People still kept pressing the type panels – me included!
To paraphrase Naoto Fukasawa: sometimes design dissolves in behaviour and then quickly sublimates into hastily-printed and laminated signage…

Glanceable / Pored-Over

I’ve been catching up on the internets after a long roadtrip over Christmas.

Two images from the ever-excellent infosthetics.com made me think that the best interaction and information design is stuff that can be glanced-at:

To be glanced-at

or pored-over:

To be pored-over

but unfortunately, most commercial interaction design falls between these two stools, in the ‘don’t make me think’ category.

I’d like to create services that scamper between beautiful extremes in 2008…