Gas Town and Bullet Hell

Warning: a collection of half-formed thoughts about time, screens, AI agents, and a surprisingly relevant Japanese arcade genre.


This started with a phrase in Azeem Azhar’s piece about his AI agent workflow: “wall-clock time.”

“Two Timer” clock by Industrial Facility

It’s a term of art in programming: the actual elapsed time on the clock on the wall, as opposed to CPU time or token throughput or any other measure of what the machine is doing internally.
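For the curious, the distinction is easy to see in a few lines of Python. A minimal sketch, nothing more: the sleep below costs a full second of wall-clock time but essentially no CPU time, which is exactly the gap the term exists to name.

```python
import time

def busy_work(n=2_000_000):
    # Deliberately CPU-bound: sum of squares.
    return sum(i * i for i in range(n))

wall_start = time.perf_counter()   # wall-clock: real elapsed time
cpu_start = time.process_time()    # CPU time this process actually consumed

busy_work()
time.sleep(1)                      # waiting costs wall-clock time, but almost no CPU time

print(f"wall-clock: {time.perf_counter() - wall_start:.2f}s")
print(f"cpu:        {time.process_time() - cpu_start:.2f}s")
```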

I hadn’t come across it before, despite having spent years thinking about time and technology, and it lodged in my head.

The interesting thing for me about AI agents isn’t just how much they can do; it’s the growing gap between the machine’s time and the human’s time.

An agent can burn through a hundred million tokens in a day. The wall-clock time for the human supervising it is the same twenty-four hours it always was.

And then the BCG/HBR AI brain fry study landed earlier this month. Workers who oversee multiple AI agents report 33% more decision fatigue, 39% more errors, and a distinctive “buzzing” sensation, a mental fog that participants struggled to name until the researchers gave them one – “Brain Fry”. 14% of AI-using workers report this brain fry. In marketing, it’s 26%.

Steve Yegge, who’s been building Gas Town (a multi-agent orchestrator for managing colonies of 20+ parallel AI coding agents), wrote about the same phenomenon a few weeks earlier, in a post he called “The AI Vampire.”

His framing was vivid: AI makes you 10x more productive, but the productivity comes at a cost the industry hasn’t named yet. Yegge described sudden “nap attacks” (collapsing into sleep at odd hours after long vibe-coding sessions) and observed that friends at other AI-native startups were reporting the same thing.

His image was Colin Robinson from What We Do in the Shadows: an energy vampire, sitting on your shoulder, drinking while you (it? both?) code.

The work is exhilarating and draining, simultaneously, because AI automates the easy parts and leaves you with an unbroken stream of hard decisions compressed into the same number of hours.

Both accounts are being framed, mostly, as a UX problem (better dashboards), a training problem (up-skill your people), or a management problem (set limits). All valid?

But it seems to me that something else is going on — something older and more structural — and it has to do with clocks.

Time Machine Go!

There’s a long, rich body of work about what technology does to the experience of time, and I keep coming back to it. (I’ve been circling this for a while — a talk at DxF in Utrecht back in 2009, “All the Time in the World,” about how human cultures construct time and how designers might deconstruct and reconstruct it; the grain of spacetime as a design material; antichronos and the compound nature of time; the notion of chronodynamic design.)

But the brain fry study has maybe sharpened something for me.

E.P. Thompson’s “Time, Work-Discipline, and Industrial Capitalism” (1967) is the essential starting point. His argument: clock-time is not a natural given. It’s a technology, imposed by the factory system.

Pre-industrial societies worked to task-time — you milked the cow when the cow needed milking, you fished when the tide was right. The mechanical clock and the factory bell imposed a different regime: synchronised, disciplinary, abstract. And crucially, it wasn’t just imposed from above: it was internalised, through schooling, religion, print culture, until it felt like common sense.

James Carey showed how the telegraph extended this further — it could transmit time faster than a train could carry it, which is how we ended up with standardised time zones. The telegraph didn’t just speed up communication; it made wall-clock time universal. And then came the step that I think matters most for where we are now. 

About Time by David Rooney

David Rooney’s About Time traces what happened when precise, synchronised time could be distributed electrically — wired clocks in factories, schools, railway stations, town squares. The Brno electric time system of 1903 is his case study.

Once the infrastructure existed to push accurate time into every public space, clock-discipline stopped being merely an economic requirement and became a moral one.

Punctuality became a virtue. Being on time was being a good citizen, a reliable worker, a decent person. The machinery of timekeeping was internalised so completely that it ceased to look like machinery at all — it looked like character. Electric time could be exported across the industrialised world not just as coordination but as morality.

Carolyn Marvin, in When Old Technologies Were New (1988), demonstrated the same pattern from a different angle: every new medium — telephone, electric light, radio — was received as “new” precisely to the extent that it seemed to annihilate time and distance.

The rhetoric is remarkably consistent across eras.

We’ve been having the same conversation about technology conquering time for about a hundred and fifty years.

So wall-clock time — the time of schedules, meetings, train timetables — was already a technological imposition on older, bodily rhythms.

It’s not the “natural” baseline against which AI’s speed is measured. It’s just the previous generation’s machine. And — per Rooney — it’s not just a machine. It’s a machine that learned to dress up as a moral principle.

But something has shifted. 

Félix Guattari distinguished between human time and machinic time: the former mediated by clocks and institutions, the latter operating at computational speeds that exceed human perception entirely. Hartmut Rosa calls it the “shrinking of the present” — the window in which your past experience reliably predicts the future gets narrower with each acceleration. And Paul Virilio spent decades developing what he called dromology — from the Greek dromos, a racetrack — essentially a science of speed.

Dromology in the DCU: The Speed Force…

His argument was that the history of civilisation is not primarily a history of wealth or territory but of velocity: who controls the fastest, densest barrage controls the territory. Each new speed technology — the stirrup, the railway, the telegraph, the missile, the fibre-optic cable — reshapes not just logistics but perception itself.

Speed doesn’t just let you move more easily; it changes what you can see, hear, and think. Push acceleration far enough and you get what Virilio called the “aesthetics of disappearance” — things moving too fast to be perceived at all. The landscape seen from a bullet train isn’t a landscape anymore; it’s a blur. The high-frequency trade executed in microseconds isn’t a decision anymore; it’s a reflex of infrastructure.

The BCG study’s “buzzing” and “mental fog” sit right in this lineage. Railway passengers in the 1840s reported nervous exhaustion at 30mph — what doctors called “railway spine”.

Schivelbusch documented how rail speed literally rewired perception: landscapes became panoramic blurs, attention fragmented, a new kind of fatigue emerged that the medical establishment had no language for. Telegraph operators developed what we’d now recognise as burnout. The body protesting a tempo it didn’t choose.

So maybe brain fry is the 2026 version of railway spine?

That is, an embodied protest from a nervous system being asked to run at a tempo it didn’t evolve for.

Brain Fry & Bullet Hell

This came to mind when I was trying to describe the feeling of supervising multiple AI agents to a friend: the way you end up in a state of continuous partial attention, scanning outputs, waiting for something to go wrong, never quite able to look away. And I realised the closest analogy I had was danmaku.

For those who haven’t encountered it: danmaku (弾幕, literally “bullet curtain”) is a Japanese arcade genre — sometimes called “bullet hell” — where the screen fills with hundreds of projectiles in elaborate, spiralling patterns. The player’s ship is tiny. The bullets are everywhere. The whole point is overwhelm. Games like Touhou, DoDonPachi, Ikaruga.

Beautiful, punishing, compulsive.

I think Ikaruga was my introduction to them.

Ikaruga

In danmaku, information throughput exceeds conscious processing — you literally cannot track each bullet individually.

The BCG finding that cognitive load spikes after three AI tools describes the same saturation point: too many concurrent streams of machine-speed output for a single human to monitor serially.

Touhou

But – expert danmaku players don’t get faster. They change how they see.

They shift from focused attention (tracking individual bullets) to a kind of peripheral soft-focus — reading patterns, finding the safe channel through the barrage. It’s a perceptual shift, not a speed upgrade. And it leads, reliably, to flow states. Csikszentmihalyi’s sweet spot: challenge meets skill, self-consciousness dissolves, time distorts in the good way. Players describe it as exhilarating.

So: a human being synchronises their nervous system to machinic time, processes hundreds of parallel streams of machine-speed output, and the result is exhilaration.

Meanwhile, another human being supervises three AI agents producing parallel text outputs at roughly the same structural tempo, and the result is brain fry.

Same physics. Opposite feeling.

I think three things account for that gap.

First, consent. The danmaku player chooses the machine’s tempo. That’s the game — you opt in. The knowledge worker has it imposed by a productivity mandate. Thompson again: the difference between dancing and marching is who sets the beat. The factory bell and the AI agent notification are structurally identical — both impose a rhythm from outside the body. One is discipline, the other is play, depending entirely on the power relationship.

Second, legibility. Bullets are unambiguous. A bullet is a threat, a gap is safety, the feedback loop is instant and total. AI agent output requires continuous evaluative judgment — is this correct? relevant? hallucinated? — which loads a different, slower cognitive system on top of the tracking task. You’re playing bullet hell, except some of the bullets might be power-ups, but you can’t tell until you stop and read them carefully. Which rather defeats the purpose of the soft-focus.

Third, reversibility. Die in danmaku, you lose a life and restart. The stakes are emotional, not consequential. If you miss a sloppy AI output — a hallucinated fact, a wrong number, an email sent with your name on it — the damage is real, IRL. The fear of consequential failure, however small, prevents exactly the relaxed alertness that flow requires.

An excursion to The Bullet Farm

There’s an etymological thing here that I find quite evocative.

弾幕 — danmaku — starts as a military term.

A barrage. Suppressive fire. The purpose isn’t to hit specific targets but to make an entire zone impassable.

The word migrates to arcade games in the 1990s, where the screen becomes the impassable zone.

Then it migrates again to Niconico Douga in the 2000s, where it describes the dense scrolling comment overlays that cover the video — thousands of viewer comments streaming across simultaneously. A curtain of text.

Three instances of the same image: a barrage of projectiles, a barrage of pixels, a barrage of words.

And then (this is where it gets a bit more indulgent, but bear with me) there’s George Miller’s Fury Road.

The Bullet Farmer.

One of three warlords controlling essential resources in a post-apocalyptic economy — water, fuel, ammunition.

His power isn’t that he uses the bullets; it’s that he controls their supply. He doesn’t need to aim. He just needs to fill the zone. Dromology again: whoever controls the fastest, densest barrage controls the territory.

It’s not lost on me that Yegge named his multi-agent orchestrator after the Fury Road settlement. Gas Town — the place that refines and distributes fuel.

In Miller’s economy, Gas Town, the Bullet Farm, and the Citadel form a tripartite monopoly on the resources that make movement, violence, and survival possible.

Yegge’s Gas Town manages the fuel supply for AI coding agents — the orchestration layer that keeps the colony of twenty-plus agents running. But the Bullet Farm is maybe the bit nobody’s building yet: the thing that manages the barrage of outputs those agents produce, and the human attention required to survive it.

Think about this in relation to the AI landscape more broadly. The competitive advantage isn’t in any single agent’s output quality — it’s in the sheer volume and speed of the barrage. Flood the workspace with tools, agents, copilots. The worker, like Furiosa, has to find a path through it.

So the word carries four registers: military (suppress movement), ludic (overwhelm as play), communal (overwhelm as shared experience), and political-economic (overwhelm as resource monopoly). Each preserves the core logic — the barrage as design feature, not failure — but the human’s relationship to it changes completely depending on context.

And AI agent oversight is arguably the first context where the barrage is accidental.

Nobody designed multi-agent workflows to feel like bullet hell.

And yet.

The design problem this reveals

If brain fry is a clock problem — a temporal mismatch between human cognition and machinic speed — then solutions that only address interface design or training will help at the margins but miss the structural issue.

Just as telling 1840s railway passengers to “get used to it” didn’t prevent nervous illness.

The danmaku analogy suggests a different set of questions.

If we want AI agent work to feel more like flow and less like fry, the challenge isn’t making things faster or even slower — it’s about legibility, consent, and reversibility, and all three matter at once.

Legibility first: can agent outputs be designed to be scannable as patterns rather than read as individual documents?

Not better summaries — actual visual or structural affordances that let you soft-focus and spot the anomaly, the way a danmaku player spots the gap in the curtain.

Something closer to a radar screen than a text feed.
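What might that look like in practice? A deliberately toy sketch, for illustration only (the AgentUpdate shape and the anomaly heuristic are invented here, not any real product’s API): collapse each agent’s latest state into a single cell, and only the cells that break a threshold ask for your attention.

```python
from dataclasses import dataclass

@dataclass
class AgentUpdate:
    # Hypothetical summary of one agent's latest output.
    agent: str
    tests_passing: bool
    diff_lines: int
    confidence: float   # agent's self-reported confidence, 0..1

def radar(updates: list[AgentUpdate], diff_threshold: int = 400) -> str:
    """Render a one-line 'radar' instead of a text feed:
    '.' means nothing demands attention, '!' means look here."""
    cells = []
    for u in updates:
        flagged = (not u.tests_passing) or u.diff_lines > diff_threshold or u.confidence < 0.5
        cells.append(f"{u.agent}:{'!' if flagged else '.'}")
    return "  ".join(cells)

print(radar([
    AgentUpdate("refactor", True, 120, 0.9),
    AgentUpdate("migrations", False, 80, 0.7),   # failing tests: flagged
    AgentUpdate("docs", True, 950, 0.8),         # enormous diff: flagged
]))
# refactor:.  migrations:!  docs:!
```

The point isn’t this particular heuristic; it’s that the human scans a pattern and only drops into reading mode where the pattern breaks.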

Then consent: can workers set their own review tempo? Asynchronous handoffs rather than real-time monitoring. What Sarah Sharma calls “temporal sovereignty” — the right to set your own pace.

The BCG data shows that AI reduces burnout when it offloads repetitive work and increases it when it demands oversight. The variable is who controls the clock.

And reversibility: can we lower the stakes of missing something?

Undo, rollback, draft-before-send, human-in-the-loop-but-not-human-as-the-loop. If the consequence of missing a bad output is catastrophic, the nervous system clenches into hypervigilance.

If it’s recoverable, the nervous system can relax into the peripheral awareness that actually works better for this kind of monitoring.

Anyone remember Braid?

Maybe there’s a hybrid of Braid and git that we need.
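To make the shape of that concrete, another toy sketch (the class and method names are mine, invented for illustration): every piece of agent output lands as a checkpoint in a history you can wind back to, so missing something late is recoverable rather than catastrophic.

```python
class RewindableWorkspace:
    """Toy sketch: agent output lands as checkpoints, never straight into 'the world'."""

    def __init__(self, state: str = ""):
        self.history = [state]          # every accepted state, oldest first

    def propose(self, agent_output: str) -> int:
        # Agent work is staged as a new checkpoint; nothing is sent, merged or deployed yet.
        self.history.append(agent_output)
        return len(self.history) - 1    # checkpoint id

    def rewind(self, checkpoint: int) -> str:
        # The Braid move: time runs backwards to any earlier state.
        self.history = self.history[: checkpoint + 1]
        return self.history[-1]

    @property
    def current(self) -> str:
        return self.history[-1]

ws = RewindableWorkspace("v0: empty draft")
ws.propose("v1: agent drafts the email")
bad = ws.propose("v2: agent adds a hallucinated statistic")
ws.rewind(bad - 1)      # caught it late? no matter, wind back
print(ws.current)       # -> v1: agent drafts the email
```

Git already gives you something like this for code; the Braid-ish part is making rewind cheap and habitual for everything else an agent touches.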

I keep coming back to Marvin’s insight that technologies are not fixed natural objects but “constructed complexes of habits, beliefs, and procedures embedded in elaborate cultural codes.” The temporal regime of multi-agent AI work isn’t inevitable — it’s being constructed right now, through design choices and management practices and vendor incentives and labour relations. And — this is the Rooney point again — it’s already being moralised.

Not using AI is starting to be framed as if it’s professional negligence. Not keeping up with the agents feels like a personal failing, not a structural mismatch. The Brno electric clock trick is happening again: a new tempo imposed by infrastructure, dressed up as character.

Punctuality was the virtue of the electric age; throughput is the virtue of the agentic one.

Humanity’s final keyboard, source unknown via Ben Mathes

We’ve been here before.

The factory bell, the railway timetable, the telegraph wire, the always-on smartphone — each imposed a new temporal discipline, each produced its own characteristic form of exhaustion, and each was eventually (partially, imperfectly) domesticated through a combination of regulation, design, and collective action.

The question is whether we can do that faster this time.

Or whether — per Rosa’s paradox — acceleration makes the process of adapting to acceleration itself harder. I suspect it’s the latter, but I’d quite like to be wrong.

Let’s see.


Some of the thinking here draws on Thompson, Schivelbusch, Carey, Marvin, Rooney, Virilio, Rosa, Guattari, Crary, and Sharma — a bibliography of people who’ve been worrying about what machines do to time for rather longer than the current AI discourse might suggest. The BCG/HBR brain fry study is by Bedard, Kropp, Hsu, Karaman, Hawes, and Kellerman. Steve Yegge’s “The AI Vampire” and Gas Town are essential reading on the lived experience of multi-agent orchestration.


Colophon: how this was made

It would be dishonest not to mention this, given what the post is about.

Azeem’s piece — the one that started this — was partly authored by his AI agent. So here we are: an agent-assisted post about agent-assisted posts about the experience of working with agents.

Turtles all the way down, etc.

This piece was written with Claude, over the course of a single session. The process went roughly like this: I had a cluster of half-connected thoughts — Azeem’s “wall-clock time” phrase, the BCG brain fry study, Yegge’s AI Vampire, a memory of Carolyn Marvin, the danmaku thing that occurred to me while trying to explain what agent-wrangling feels like, and a book on my shelf I’d been meaning to think harder about (Rooney). I knew there was a thread running through them but I hadn’t pulled it taut.

What Claude did, in machinic time, was the research legwork: finding and synthesising the Thompson-Carey-Virilio-Rosa-Guattari lineage, pulling together the BCG study’s specific data points, confirming citations, searching for connections I suspected existed but hadn’t verified. It produced structured research notes, then a set of blog post ideas, then a draft. Each round took minutes of wall-clock time and involved the kind of parallel literature review that would have taken me days of reading and note-taking.

What I did, in human time, was something different.

I provided the initial constellation of ideas — the specific intellectual connections that felt interesting rather than merely logical. I pushed back on structure and emphasis. I said “does danmaku connect to this?” and “there’s a Bullet Farm in Mad Max” and “what about Rooney’s electric time as morality?” — the sideways moves, the half-remembered things that might or might not be relevant. Honestly at points I felt like a court jester or the class clown in the seminar. I also read drafts with my own sense of voice and rhythm and cut or redirected when it didn’t feel right. The style guide helped here — Claude had a description of how I write, which is a strange thing to hand over, like giving someone your gait analysis and asking them to walk for you.

I don’t think this invalidates the post — if anything, it’s evidence for it. But I wanted to show the working, because it seems important to be honest about the means of production when the means of production are the subject.

The result is something I couldn’t have written this fast alone (or at all?), and something Claude couldn’t have written at all alone — not because it lacks the ability to string sentences together, but because it didn’t have the initial constellation.

It didn’t know that danmaku and the Bullet Farm and Rooney’s Brno clocks belonged in the same thought. Maybe they don’t according to the embedding space.

That pattern-recognition — this goes with this — was the human contribution. The machine contributed speed, breadth, and a tireless willingness to restructure on demand.

Which is, of course, exactly the dynamic the post describes.

I was the player in the bullet hell, trying to maintain soft-focus across the agent’s outputs, steering by feel rather than tracking every token. It was — at various points — exhilarating and a bit draining. Not quite brain fry, but I could see it from where I was sitting.

The temporal mismatch is real: Claude can produce a 3,000-word draft in seconds, and then you spend twenty minutes reading it with the nagging sense that you should be going faster, that you’re the bottleneck, that the machine is waiting.

Rooney’s moralisation of the clock is right there in the room with you. 

Why aren’t you keeping up?

“Magic notebooks, not magic girlfriends”

The take-o-sphere is awash with responses to last week’s WWDC, and the announcement of “Apple Intelligence”.

My old friend and colleague Matt Webb’s is one of my favourites, needless to say – and I’m keen to try it, naturally.

I could bang on about it of course, but I won’t – because I guess I have already.

Of course, the concept is the easy bit.

Having a trillion-dollar corporation actually then make it, especially when it’s counter to their existing business model, is another thing.

I’ll just leave this here from about 6 years ago…

BUT!

What I do want to talk about is the iPad calculator announcement that preceded the broader AI news.

As a fan of Bret Victor, this made me very happy.

As a fan of Seymour Papert it made me very happy.

As a fan of Alan Kay and the original vision of the Dynabook it made me very happy.

But moreover – as someone who has never been that excited by the chatbot/voice obsessions of BigTech, it was wonderful to see.

Of course the proof of this pudding will be in the using, but the notion of a real-time magic notebook where the medium is an intelligent canvas responding as an ‘intelligence amplifier’ is much more exciting to me than most of the currently hyped visions of generative AI.

I was particularly intrigued to see the more diagrammatic example below, which seemed to belong in the conceptual space between Bret Victor’s Dynamicland and Papert’s Mathland.

I recall that when I read Papert’s “Mindstorms” (back in 2012, it seems?) I got retroactively angry about how I had been taught mathematics.

The ideas he advances for learning maths through play, embodiment and experimentation made me sad that I had not had the chance to experience the subject through those lenses, but instead through rote learning leading to my rejection of it until much later in life.

As he says “The kind of mathematics foisted on children in schools is not meaningful, fun, or even very useful.”

Perhaps most famously he writes:

“a computer can be generalized to a view of learning mathematics in “Mathland”; that is to say, in a context which is to learning mathematics what living in France is to learning French.”

Play, embodiment, experimentation – supported by AI – not *done* for you by AI.

I mean, I’m clearly biased.

I’ve long thought the assistant model should be considered harmful. Perhaps the Apple approach announced at WWDC means it might not be the only game in town for much longer.

Back at Google I was pursuing concepts of Personal AI with something called Project Lyra, which perhaps one day I can go into a bit more deeply.

Anyway.

Early on, Jess Holbrook turned me on to the work of Professor Andy Clark, and I thought I’d try and get to work with him on this.

My first email to him had the subject line of this blog post: “Magic notebooks, not magic girlfriends” – which I think must have intrigued him enough to respond.

This, in turn, led to the fantastic experience of meeting up with him a few times while he was based in Edinburgh and having him write a series of brilliant pieces (for internal consumption only, sadly) on what truly personal AI might mean through his lens of cognitive science and philosophy.

As a tease here’s an appropriate snippet from one of Professor Clark’s essays:

“The idea here (the practical core of many somewhat exotic debates over the ‘extended mind’) is that considered as thinking systems, we humans already are, and will increasingly become, swirling nested ecologies whose boundaries are somewhat fuzzy and shifting. That’s arguably the human condition as it has been for much of our recent history—at least since the emergence of speech and the collaborative construction of complex external symbolic environments involving text and graphics. But emerging technologies—especially personal AI’s—open up new, potentially ever-more-intimate, ways of being cognitively extended.”

I think that’s what I object to, or at least recoil from in the ‘assistant’ model – we’re abandoning exploring loads of really rich, playful ways in which we already think with technology.

Drawing, model making, acting things out in embodied ways.

Back to Papert’s Mindstorms:

“My interest is in the process of invention of “objects-to-think-with,” objects in which there is an intersection of cultural presence, embedded knowledge, and the possibility for personal identification.”

“…I am interested in stimulating a major change in how things can be. The bottom line for such changes is political. What is happening now is an empirical question. What can happen is a technical question. But what will happen is a political question, depending on social choices.”

The somewhat lost futures of Kay, Victor and Papert are now technically realisable.

“what will happen is a political question, depending on social choices.”

The business model is the grid, again.

That is, Apple are toolmakers, at heart – and personal device sellers at the bottom line. They don’t need to maximise attention or capture you as a rent (mostly). That makes personal AI as a ‘thing’ that can be sold a much more viable choice for them, of course.

Apple are far freer, well-placed (and of course well-resourced) to make “objects-to-think-with, objects in which there is an intersection of cultural presence, embedded knowledge, and the possibility for personal identification.”

The wider strategy of “Apple Intelligence” appears to be just that.

But – my hope is the ‘magic notebook’ stance in the new iPad calculator represents the start of exploration in a wider, richer set of choices in how we interact with AI systems.

Let’s see.

Chrome Dreams II

Season 4 of “For All Mankind” just started on Apple TV.

I imagine the Venn set of folks who still read this and watch is largely overlapped, but for those who don’t, it’s an alt-history of the 20thC/21stC space race on a divergent world-line where the USSR got to the moon first. As with most things prestige-tv, that becomes a setting for a soap opera, but hey.

But – the above moment, where one of the main characters communicates from Mars to their daughter, was notable for the Proustian rush it gave.

A perfect period piece of UI in production design – the brushed chrome and aqua buttons of the early aughts.

To have that kind of historical reference point from a field so profoundly (proudly?) ahistorical as design for technology is a shock.

Partner / Tool / Canvas: UI for AI Image Generators

“Howl’s Moving Castle, with Solar Panels” – using Stable Diffusion / DreamStudio Lite

Like a lot of folks, I’ve been messing about with the various AI image generators as they open up.

While at Google I got to play with language model work quite a bit, and we worked on a series of projects looking at AI tools as ‘thought partners’ – but mainly in the space of language with some multimodal components.

As a result perhaps – the things I find myself curious about are not so much the models or the outputs – but the interfaces to these generator systems and the way they might inspire different creative processes.

For instance – Midjourney operates through a Discord chat interface – reinforcing perhaps the notion that there is a personage at the other end crafting these things and sending them back to you in a chat. I found the turn-taking dynamic underlines play and iteration – creating an initially addictive experience despite the clunkiness of the UI. It feels like an infinite game. You’re also exposed (whether you like it or not…) to what others are producing – and the prompts they are using to do so.

DALL-E and Stable Diffusion via DreamStudio have more of a ‘traditional’ tool UI, with a canvas where the prompt is rendered, which the user can tweak with various settings and sliders. It feels (to me) less open-ended – but more tunable, more open to ‘mastery’ as a useful tool.

All three to varying extents resurface prompts and output from fellow users – creating a ‘view-source’ loop for newbies and dilettantes like me.

Gerard Serra – who we were lucky to host as an intern while I was at Google AIUX – has been working on perhaps another possibility for ‘co-working with AI’.

While this is back in the realm of LLMs and language rather than image generation, I am a fan of the approach: creating a shared canvas that humans and AI co-work on. How might this extend to image generator UI?

Conversation with Google ATAP / Shaping Things 2.0 vs Web3.0

Was interviewed recently by Anastasiia Mozghova for Google ATAP’s twitter feed, where it’s being published as a thread in two parts (part one here).

It centred around ambient computing and work I’ve been involved in around that subject in the past – particularly looking at notions of ‘social grace’ in computing.

This was a concept we discussed a lot in relation to Soli, the radar sensor that Google ATAP invented. I was involved in that project a little at Creative Lab, then more peripherally still (as kind of a cheerleader if anything) at Google Research.

That concept of social grace, for me, begins in the writing of Mark Weiser and continues amplified in Adam Greenfield’s “Everyware” – which still has some very relevant nuggets in it, IMHO, for a book on technology written in 2006 (16 years ago at time of writing!)

I also mention another classic – which is fast coming up on its 20th anniversary (next year?) – Bruce Sterling’s Shaping Things.

Its eco-centric vision of ‘spimes’ rhymes (sorry) with the current hype around web3, tokenisation, etc etc – but while that is obsessed with financialisation, Shaping Things instead hints at using data-sousveillance means toward ends of resource responsibility, circularity and accountability in design and manufacture.

Time for a second edition?

The beginning of BotWorld



Sad & Hairy, originally uploaded by moleitau.

A while back (two years ago in fact, just under a year before we at BERG announced Little Printer), Matt Webb gave an excellent talk at the Royal Institution in London called “BotWorld: Designing for the new world of domestic A.I.”

This week, all of the Little Printers that are out in the world started to change.

Their hair started to grow (you can trim it if you like) and they started to get a little sad if they weren’t used as often as they’d like…


The world of domesticated, tiny AIs that Matt was talking about two years ago is what BERG is starting to explore, manufacture – and sell in ever larger numbers.

I poked at it as well, in my talk “Gardens & Zoos” about a year ago, building on Matt Webb’s thinking – suggesting that Little Printer was akin to a pot-plant in its behaviour, volition and place in our homes.

Little Valentines Printer

Now it is in people’s homes, workplaces, schools – it’s fascinating to see how their relationships with it play out every day.


I’m amazed and proud of the team for a brilliant bit of thinking-through-making-at-scale, which, though it just does very simple things right now, is our platform for playing with the particular corner of the near-future that Matt outlined in his talk.

Jessica Helfand and Tony Stark vs. Don Norman, Paul Dourish and Joss Whedon

Jessica Helfand of Design Observer on Iron Man’s user-interfaces as part of the dramatis personae:

“…in Iron Man, vision becomes reality through the subtlest of physical gestures: interfaces swirl and lights flash, keyboards are projected into the air, and two-dimensional ideas are instantaneously rendered as three and even four-dimensional realities. Such brilliant optical trickery is made all the more fantastic because it all moves so quickly and effortlessly across the screen. As robotic renderings gravitate from points of light in space into a tangible, physical presence, the overall effect merges screen-based, visual language with a deftly woven kind of theatrical illusion.”

Made me think back to a post I wrote here about three years ago, on “invisible computing” in Joss Whedon’s “Firefly”.

Firefly touch table

“…one notices that the UI doesn’t get in the way of the action, the flow of the interactions between the bad guy and the captain. Also, there is a general improvement in the quality of the space, it seems, when there are no obtrusive vertical screens in line-of-sight to sap the attention of those within it.”

Firefly touch table

Instead of the Iron Man/Minority Report approach of making the gestural UI the star of the sequence, this is more interesting – a natural computing interface supports the storytelling – perhaps reminding the audience where the action is…

As Jessica points out in her post, it took us some years for email to move from 3D-rendered winged envelopes, to things that audiences had common experience and expectations of.

Three years on from Firefly, most of the audience watching scifi and action movies or tv will have seen or experienced a Nintendo Wii or an iPhone, and so some of the work of moving technology from star to set-dressing is done – no more outlandish or exotic as a setting for exposition than a whiteboard or map-table.

Having said that – we’re still in the midst of tangible UIs’ transition to the mainstream.

A fleeting shot from the new Bond trailer seems to indicate there’s still work for the conceptual UI artist, but surely this now is the sort of thing that really has a procurement number in the MI6 office supplies intranet…

Bond Touch table

And – it’s good to see Whedon still pursuing tangible, gestural interfaces in his work…

T5: Expectations of agency


I’m in Oslo for a few days, and to get there I went through Heathrow’s new and controversial Terminal Five. After all the stories, and Ryan’s talk on the service design snafus it’s experienced, I approached my visit there with excitement and trepidation.

Excitement still, because it’s still a major piece of architecture by Richard Rogers and Partners – and sparkly new airports are, well, sparkly and new.

YMMV, especially as we were travelling off-peak, but – it was pretty calm and smooth sailing all the way. I’m guessing they’ve pulled out all the stops in order to get things on an even keel.

Saw both pieces installed in the BA Club Lounges by Troika (‘Cloud’ and ‘All the time in the world’), both of which were lovely – you can get to see them both without having to be a fancypants gold carder, which is good.

The thing that struck me though was the degree of technological automation of previously human-mediated processes that were anticipated, designed and built – that then had to be retrofitted with human intervention and signage.

It’s a John Thackara rant waiting to happen, and that’s aside from all the environmental impacts he might comment on!

My favourite was the above sign added to the lifts that stop and start automatically, to make sure you understand that you can’t press anything. Of course, we’re trained to expect agency or at least the simulation of agency in lifts – keeping doors open, selecting floors, pressing our floor button impatiently and tutting to make the lift go faster. Remember that piece in James Gleick’s FSTR where lift engineers deliberately design placebo button presses to keep us impatient humans happy? People still kept pressing the type panels – me included!
To paraphrase Naoto Fukasawa: sometimes design dissolves in behaviour and then quickly sublimates into hastily-printed and laminated signage…