Gas Town and Bullet Hell

Warning: a collection of half-formed thoughts about time, screens, AI agents, and a surprisingly relevant Japanese arcade genre.


This started with a phrase in Azeem Azhar’s piece about his AI agent workflow: “wall-clock time.”

“Two Timer” clock by Industrial Facility

It’s a term of art in programming: the actual elapsed time on the clock on the wall, as opposed to CPU time or token throughput or any other measure of what the machine is doing internally.
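A minimal sketch of the distinction, in Python: the standard library keeps the two clocks separate, and a process that is merely waiting burns wall-clock time while barely touching the CPU. (The timings in the comments are indicative, not measured.)

```python
import time

start_wall = time.monotonic()      # elapsed real ("wall-clock") time
start_cpu = time.process_time()    # CPU time consumed by this process only

time.sleep(2)                      # waiting: the wall clock advances, the CPU idles
sum(i * i for i in range(10**6))   # computing: both clocks advance

print(f"wall-clock: {time.monotonic() - start_wall:.2f}s")   # roughly 2.1s
print(f"cpu:        {time.process_time() - start_cpu:.2f}s")  # a fraction of a second
```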

I hadn’t come across it before, despite having spent years thinking about time and technology, and it lodged in my head.

The interesting thing for me about AI agents isn’t just how much they can do; it’s the growing gap between the machine’s time and the human’s time.

An agent can burn through a hundred million tokens in a day. The wall-clock time for the human supervising it is the same twenty-four hours it always was.

And then the BCG/HBR AI brain fry study landed earlier this month. Workers who oversee multiple AI agents report 33% more decision fatigue, 39% more errors, and a distinctive “buzzing” sensation, a mental fog that participants struggled to name until the researchers gave them one – “Brain Fry”. 14% of AI-using workers report this brain fry. In marketing, it’s 26%.

Steve Yegge – who’s been building Gas Town, a multi-agent orchestrator for managing colonies of 20+ parallel AI coding agents – wrote about the same phenomenon a few weeks earlier, in a post he called “The AI Vampire.”

His framing was vivid: AI makes you 10x more productive, but the productivity comes at a cost the industry hasn’t named yet. Yegge described sudden “nap attacks” – collapsing into sleep at odd hours after long vibe-coding sessions – and observed that friends at other AI-native startups were reporting the same thing.

His image was Colin Robinson from What We Do in the Shadows: an energy vampire, sitting on your shoulder, drinking while you (it? both?) code.

The work is exhilarating and draining, simultaneously, because AI automates the easy parts and leaves you with an unbroken stream of hard decisions compressed into the same number of hours.

Both accounts are being framed, mostly, as a UX problem (better dashboards), a training problem (up-skill your people), or a management problem (set limits). All valid?

But it seems to me that something else is going on — something older and more structural — and it has to do with clocks.

Time Machine Go!

There’s a long, rich body of work about what technology does to the experience of time, and I keep coming back to it. (I’ve been circling this for a while — a talk at DxF in Utrecht back in 2009, “All the Time in the World,” about how human cultures construct time and how designers might deconstruct and reconstruct it; the grain of spacetime as a design material; antichronos and the compound nature of time; the notion of chronodynamic design.)

But the brain fry study has maybe sharpened something for me.

E.P. Thompson’s “Time, Work-Discipline, and Industrial Capitalism” (1967) is the essential starting point. His argument: clock-time is not a natural given. It’s a technology, imposed by the factory system.

Pre-industrial societies worked to task-time — you milked the cow when the cow needed milking, you fished when the tide was right. The mechanical clock and the factory bell imposed a different regime: synchronised, disciplinary, abstract. And crucially, it wasn’t just imposed from above: it was internalised, through schooling, religion, print culture, until it felt like common sense.

James Carey showed how the telegraph extended this further — it could transmit time faster than a train could carry it, which is how we ended up with standardised time zones. The telegraph didn’t just speed up communication; it made wall-clock time universal. And then came the step that I think matters most for where we are now. 

About Time by David Rooney

David Rooney’s About Time traces what happened when precise, synchronised time could be distributed electrically — wired clocks in factories, schools, railway stations, town squares. The Brno electric time system of 1903 is his case study.

Once the infrastructure existed to push accurate time into every public space, clock-discipline stopped being merely an economic requirement and became a moral one.

Punctuality became a virtue. Being on time was being a good citizen, a reliable worker, a decent person. The machinery of timekeeping was internalised so completely that it ceased to look like machinery at all — it looked like character. Electric time could be exported across the industrialised world not just as coordination but as morality.

Carolyn Marvin, in When Old Technologies Were New (1988), demonstrated the same pattern from a different angle: every new medium — telephone, electric light, radio — was received as “new” precisely to the extent that it seemed to annihilate time and distance.

The rhetoric is remarkably consistent across eras.

We’ve been having the same conversation about technology conquering time for about a hundred and fifty years.

So wall-clock time — the time of schedules, meetings, train timetables — was already a technological imposition on older, bodily rhythms.

It’s not the “natural” baseline against which AI’s speed is measured. It’s just the previous generation’s machine. And — per Rooney — it’s not just a machine. It’s a machine that learned to dress up as a moral principle.

But something has shifted. 

Félix Guattari distinguished between human time and machinic time: the former mediated by clocks and institutions, the latter operating at computational speeds that exceed human perception entirely. Hartmut Rosa calls it the “shrinking of the present” — the window in which your past experience reliably predicts the future gets narrower with each acceleration. And Paul Virilio spent decades developing what he called dromology — from the Greek dromos, a racetrack — essentially a science of speed.

Dromology in the DCU: The Speed Force…

His argument was that the history of civilisation is not primarily a history of wealth or territory but of velocity: whoever controls the fastest, densest barrage controls the territory. Each new speed technology — the stirrup, the railway, the telegraph, the missile, the fibre-optic cable — reshapes not just logistics but perception itself.

Speed doesn’t just let you move more easily; it changes what you can see, hear, and think. Push acceleration far enough and you get what Virilio called the “aesthetics of disappearance” — things moving too fast to be perceived at all. The landscape seen from a bullet train isn’t a landscape anymore; it’s a blur. The high-frequency trade executed in microseconds isn’t a decision anymore; it’s a reflex of infrastructure.

The BCG study’s “buzzing” and “mental fog” sit right in this lineage. Railway passengers in the 1840s reported nervous exhaustion at 30mph — what doctors called “railway spine”.

Wolfgang Schivelbusch documented how rail speed literally rewired perception: landscapes became panoramic blurs, attention fragmented, a new kind of fatigue emerged that the medical establishment had no language for. Telegraph operators developed what we’d now recognise as burnout. The body protesting a tempo it didn’t choose.

So maybe brain fry is the 2026 version of railway spine?

That is: an embodied protest by a nervous system being asked to run at a tempo it didn’t evolve for.

Brain Fry & Bullet Hell

This came to mind when I was trying to describe the feeling of supervising multiple AI agents to a friend: the way you end up in a state of continuous partial attention, scanning outputs, waiting for something to go wrong, never quite able to look away – and I realised the closest analogy I had was danmaku.

For those who haven’t encountered it: danmaku (弾幕, literally “bullet curtain”) is a Japanese arcade genre — sometimes called “bullet hell” — where the screen fills with hundreds of projectiles in elaborate, spiralling patterns. The player’s ship is tiny. The bullets are everywhere. The whole point is overwhelm. Games like Touhou, DoDonPachi, Ikaruga.

Beautiful, punishing, compulsive.

I think Ikaruga was my introduction to them.

Ikaruga

In danmaku, information throughput exceeds conscious processing — you literally cannot track each bullet individually.

The BCG finding that cognitive load spikes after three AI tools describes the same saturation point: too many concurrent streams of machine-speed output for a single human to monitor serially.

Touhou

But – expert danmaku players don’t get faster. They change how they see.

They shift from focused attention (tracking individual bullets) to a kind of peripheral soft-focus — reading patterns, finding the safe channel through the barrage. It’s a perceptual shift, not a speed upgrade. And it leads, reliably, to flow states. Csikszentmihalyi’s sweet spot: challenge meets skill, self-consciousness dissolves, time distorts in the good way. Players describe it as exhilarating.

So: a human being synchronises their nervous system to machinic time, processes hundreds of parallel streams of machine-speed output, and the result is exhilaration.

Meanwhile, another human being supervises three AI agents producing parallel text outputs at roughly the same structural tempo, and the result is brain fry.

Same physics. Opposite feeling.

I think three things account for that gap.

First, consent. The danmaku player chooses the machine’s tempo. That’s the game — you opt in. The knowledge worker has it imposed by a productivity mandate. Thompson again: the difference between dancing and marching is who sets the beat. The factory bell and the AI agent notification are structurally identical — both impose a rhythm from outside the body. One is discipline, the other is play, depending entirely on the power relationship.

Second, legibility. Bullets are unambiguous. A bullet is a threat, a gap is safety, the feedback loop is instant and total. AI agent output requires continuous evaluative judgment — is this correct? relevant? hallucinated? — which loads a different, slower cognitive system on top of the tracking task. You’re playing bullet hell, except some of the bullets might be power-ups, but you can’t tell until you stop and read them carefully. Which rather defeats the purpose of the soft-focus.

Third, reversibility. Die in danmaku, you lose a life and restart. The stakes are emotional, not consequential. If you miss a sloppy AI output — a hallucinated fact, a wrong number, an email sent with your name on it — the damage is real, IRL. The fear of consequential failure, however small, prevents exactly the relaxed alertness that flow requires.

An excursion to The Bullet Farm

There’s an etymological thing here that I find quite evocative.

弾幕 — danmaku — starts as a military term.

A barrage. Suppressive fire. The purpose isn’t to hit specific targets but to make an entire zone impassable.

The word migrates to arcade games in the 1990s, where the screen becomes the impassable zone.

Then it migrates again to Niconico Douga in the 2000s, where it describes the dense scrolling comment overlays that cover the video — thousands of viewer comments streaming across simultaneously. A curtain of text.

Three instances of the same image: a barrage of projectiles, a barrage of pixels, a barrage of words.

And then (this is where it gets a bit more indulgent, but bear with me) there’s George Miller’s Fury Road.

The Bullet Farmer.

One of three warlords controlling essential resources in a post-apocalyptic economy — water, fuel, ammunition.

His power isn’t that he uses the bullets; it’s that he controls their supply. He doesn’t need to aim. He just needs to fill the zone. Dromology again: whoever controls the fastest, densest barrage controls the territory.

It’s not lost on me that Yegge named his multi-agent orchestrator after the Fury Road settlement. Gas Town — the place that refines and distributes fuel.

In Miller’s economy, Gas Town, the Bullet Farm, and the Citadel form a tripartite monopoly on the resources that make movement, violence, and survival possible.

Yegge’s Gas Town manages the fuel supply for AI coding agents — the orchestration layer that keeps the colony of twenty-plus agents running. But the Bullet Farm is maybe the bit nobody’s building yet: the thing that manages the barrage of outputs those agents produce, and the human attention required to survive it.

Think about this in relation to the AI landscape more broadly. The competitive advantage isn’t in any single agent’s output quality — it’s in the sheer volume and speed of the barrage. Flood the workspace with tools, agents, copilots. The worker, like Furiosa, has to find a path through it.

So the word carries four registers: military (suppress movement), ludic (overwhelm as play), communal (overwhelm as shared experience), and political-economic (overwhelm as resource monopoly). Each preserves the core logic — the barrage as design feature, not failure — but the human’s relationship to it changes completely depending on context.

And AI agent oversight is arguably the first context where the barrage is accidental.

Nobody designed multi-agent workflows to feel like bullet hell.

And yet.

The design problem this reveals

If brain fry is a clock problem — a temporal mismatch between human cognition and machinic speed — then solutions that only address interface design or training will help at the margins but miss the structural issue.

Just as telling 1840s railway passengers to “get used to it” didn’t prevent nervous illness.

The danmaku analogy suggests a different set of questions.

If we want AI agent work to feel more like flow and less like fry, the challenge isn’t making things faster or even slower — it’s about legibility, consent, and reversibility, and all three matter at once.

Legibility first: can agent outputs be designed to be scannable as patterns rather than read as individual documents?

Not better summaries — actual visual or structural affordances that let you soft-focus and spot the anomaly, the way a danmaku player spots the gap in the curtain.

Something closer to a radar screen than a text feed.
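To make that slightly more concrete – a toy sketch, with entirely invented fields and thresholds, of what reducing a colony of agents to a radar line might look like:

```python
# A toy sketch, not anyone's actual product: the fields and thresholds of
# AgentReport are invented for illustration. The idea is to collapse each
# agent's state into a single glyph, so a colony reads as a pattern you can
# soft-focus – the way a danmaku player reads the curtain, not the bullets.
from dataclasses import dataclass

@dataclass
class AgentReport:
    name: str
    tests_passing: bool
    confidence: float        # agent's self-reported confidence, 0..1
    waiting_on_human: bool

def glyph(report: AgentReport) -> str:
    if report.waiting_on_human:
        return "?"           # needs a decision – the only thing worth focus
    if not report.tests_passing:
        return "!"           # anomaly – the gap in the curtain
    return "." if report.confidence > 0.8 else "~"   # fine / slightly unsure

def radar(reports: list[AgentReport]) -> str:
    # One line per scan: "...!..~..?" reads as a pattern, not as documents.
    return "".join(glyph(r) for r in reports)
```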

Then consent: can workers set their own review tempo? Asynchronous handoffs rather than real-time monitoring. What Sarah Sharma calls “temporal sovereignty” — the right to set your own pace.

The BCG data shows that AI reduces burnout when it offloads repetitive work and increases it when it demands oversight. The variable is who controls the clock.
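In code terms, that difference is push versus pull. A hedged sketch, assuming nothing about any real orchestrator:

```python
# Another toy sketch (nothing here comes from any real orchestrator): push
# versus pull. Instead of the agent interrupting you on every completion,
# results accumulate silently in a queue and the human drains them in
# batches, at whatever cadence they choose. Same work; different clock-owner.
import queue

results: queue.Queue = queue.Queue()

def agent_finishes(output: str) -> None:
    results.put(output)      # no notification, no interrupt – just enqueue

def review_session(max_items: int = 10) -> list[str]:
    """Drain up to max_items, at a moment the human chose."""
    batch: list[str] = []
    while len(batch) < max_items:
        try:
            batch.append(results.get_nowait())
        except queue.Empty:
            break
    return batch
```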

And reversibility: can we lower the stakes of missing something?

Undo, rollback, draft-before-send, human-in-the-loop-but-not-human-as-the-loop. If the consequence of missing a bad output is catastrophic, the nervous system clenches into hypervigilance.

If it’s recoverable, the nervous system can relax into the peripheral awareness that actually works better for this kind of monitoring.

Anyone remember Braid?

Maybe there’s a hybrid of Braid and git that we need.
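For what it’s worth, a sketch of what that hybrid might minimally mean:

```python
# A toy of what "Braid meets git" might minimally mean – every name here is
# invented, and the real thing would be transactions, snapshots, or git
# itself. The contract is the point: no agent action is applied without
# recording its inverse, so any output you miss can be rewound later.
from typing import Callable

class Journal:
    def __init__(self) -> None:
        self._undo_stack: list[Callable[[], None]] = []

    def apply(self, action: Callable[[], None],
              inverse: Callable[[], None]) -> None:
        action()                          # do the agent's change...
        self._undo_stack.append(inverse)  # ...only alongside its reversal

    def rewind(self, steps: int = 1) -> None:
        for _ in range(min(steps, len(self._undo_stack))):
            self._undo_stack.pop()()      # Braid's time-reversal, crudely
```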

I keep coming back to Marvin’s insight that technologies are not fixed natural objects but “constructed complexes of habits, beliefs, and procedures embedded in elaborate cultural codes.” The temporal regime of multi-agent AI work isn’t inevitable — it’s being constructed right now, through design choices and management practices and vendor incentives and labour relations. And — this is the Rooney point again — it’s already being moralised.

Not using AI is starting to be framed as if it’s professional negligence. Not keeping up with the agents feels like a personal failing, not a structural mismatch. The Brno electric clock trick is happening again: a new tempo imposed by infrastructure, dressed up as character.

Punctuality was the virtue of the electric age; throughput is the virtue of the agentic one.

Humanity’s final keyboard, source unknown via Ben Mathes

We’ve been here before.

The factory bell, the railway timetable, the telegraph wire, the always-on smartphone — each imposed a new temporal discipline, each produced its own characteristic form of exhaustion, and each was eventually (partially, imperfectly) domesticated through a combination of regulation, design, and collective action.

The question is whether we can do that faster this time.

Or whether — per Rosa’s paradox — acceleration makes the process of adapting to acceleration itself harder. I suspect it’s the latter, but I’d quite like to be wrong.

Let’s see.


Some of the thinking here draws on Thompson, Schivelbusch, Carey, Marvin, Rooney, Virilio, Rosa, Guattari, Crary, and Sharma — a bibliography of people who’ve been worrying about what machines do to time for rather longer than the current AI discourse might suggest. The BCG/HBR brain fry study is by Bedard, Kropp, Hsu, Karaman, Hawes, and Kellerman. Steve Yegge’s “The AI Vampire” and Gas Town are essential reading on the lived experience of multi-agent orchestration.


Colophon: how this was made

It would be dishonest not to mention this, given what the post is about.

Azeem’s piece — the one that started this — was partly authored by his AI agent. So here we are: an agent-assisted post about agent-assisted posts about the experience of working with agents.

Turtles all the way down, etc.

This piece was written with Claude, over the course of a single session. The process went roughly like this: I had a cluster of half-connected thoughts — Azeem’s “wall-clock time” phrase, the BCG brain fry study, Yegge’s AI Vampire, a memory of Carolyn Marvin, the danmaku thing that occurred to me while trying to explain what agent-wrangling feels like, and a book on my shelf I’d been meaning to think harder about (Rooney). I knew there was a thread running through them but I hadn’t pulled it taut.

What Claude did, in machinic time, was the research legwork: finding and synthesising the Thompson-Carey-Virilio-Rosa-Guattari lineage, pulling together the BCG study’s specific data points, confirming citations, searching for connections I suspected existed but hadn’t verified. It produced structured research notes, then a set of blog post ideas, then a draft. Each round took minutes of wall-clock time and involved the kind of parallel literature review that would have taken me days of reading and note-taking.

What I did, in human time, was something different.

I provided the initial constellation of ideas — the specific intellectual connections that felt interesting rather than merely logical. I pushed back on structure and emphasis. I said “does danmaku connect to this?” and “there’s a Bullet Farm in Mad Max” and “what about Rooney’s electric time as morality?” — the sideways moves, the half-remembered things that might or might not be relevant. Honestly at points I felt like a court jester or the class clown in the seminar. I also read drafts with my own sense of voice and rhythm and cut or redirected when it didn’t feel right. The style guide helped here — Claude had a description of how I write, which is a strange thing to hand over, like giving someone your gait analysis and asking them to walk for you.

I don’t think this invalidates the post — if anything, it’s evidence for it. But I wanted to show the working, because it seems important to be honest about the means of production when the means of production are the subject.

The result is something I couldn’t have written this fast alone (or at all?), and something Claude couldn’t have written at all alone — not because it lacks the ability to string sentences together, but because it didn’t have the initial constellation.

It didn’t know that danmaku and the Bullet Farm and Rooney’s Brno clocks belonged in the same thought. Maybe they don’t according to the embedding space.

That pattern-recognition — this goes with this — was the human contribution. The machine contributed speed, breadth, and a tireless willingness to restructure on demand.

Which is, of course, exactly the dynamic the post describes.

I was the player in the bullet hell, trying to maintain soft-focus across the agent’s outputs, steering by feel rather than tracking every token. It was — at various points — exhilarating and a bit draining. Not quite brain fry, but I could see it from where I was sitting.

The temporal mismatch is real: Claude can produce a 3,000-word draft in seconds, and then you spend twenty minutes reading it with the nagging sense that you should be going faster, that you’re the bottleneck, that the machine is waiting.

Rooney’s moralisation of the clock is right there in the room with you. 

Why aren’t you keeping up?

This is tomorrow

“While something is “tomorrow,” institutions can hold it at arm’s length: debate it, study it, delay it, run pilots, treat it as optional. The moment it becomes “today,” the debate stops being about feasibility and becomes about distribution, governance, and consequence. The bottleneck shifts away from engineering and toward whether we have the institutional capacity to absorb a new infrastructural layer arriving faster than our social contracts can update.”

This from Indy Johar’s latest newsletter in relation to his first ride in a Waymo.

It resonates for me instead with the last few weeks’ buzz around Clawdbot and personal AI agents.

For someone who worked speculatively on this back in 2018–2020, when it was “tomorrow”, it seems the glimmers, at least, are now “today”.

Of course a significant amount of what we have right now is personalised rather than personal.

You are still a client of a centralised service, even if Clawdbot is running on the shiny new Mac mini you bought.

Lyra (and its parents: Cerebra and Oak) were predicated on “sovereign” personal AI – but maybe that, too, will soon be “today” rather than “tomorrow”…

Station Identification: Lady Chatterley’s New Year Message

To be clear, I’ve never read the book, but came to this passage via a circuitous route.

Probably my favourite podcast of the last few years is David Runciman’s “Past, Present Future”.

It has a series of episodes on great legal trials – one of which featured the landmark obscenity trial of D. H. Lawrence’s “Lady Chatterley’s Lover”.

As always, it was a fascinating deep-dive. He shared that it was his first time reading the book – in order to cover the trial’s repercussions and its reflection of British society at the time.

As a result, he released a further episode on the book itself – which is when I heard the opening lines, and found them perhaps as resonant for 2026 as for when they were published almost one hundred years ago.

“Ours is essentially a tragic age, so we refuse to take it tragically.

The cataclysm has happened, we are among the ruins, we start to build up new little habitats, to have new little hopes.

It is rather hard work: there is now no smooth road into the future: but we go round, or scramble over the obstacles.

We’ve got to live, no matter how many skies have fallen.”

Happy new year.

“Back to BASAAP” – My talk at ThingsCon 2025 in Amsterdam

Last Friday I had the pleasure of speaking at ThingsCon in Amsterdam, invited by Iskander Smit to join a day exploring this year’s theme of ‘resize/remix/regen’.

The conference took place at CRCL Park on the Marineterrein – a former naval yard that’s spent the last 350 years behind walls, first as the Dutch East India Company’s shipbuilding site (they launched Michiel de Ruyter‘s fleet from here in 1655), then as a sealed military base.

Since 2015 it’s been gradually opening up as an experimental district for urban innovation, with the kind of adaptive reuse that gives a place genuine character.

The opening keynote from Ling Tan and Usman Haque set a thoughtful and positive tone, and the whole day had an unusual quality – intimate scale, genuinely interactive workshops, student projects that weren’t just pinned to walls but actively part of the conversation. The kind of creative energy that comes from people actually making things rather than just talking about making things.

My talk was titled “Back to BASAAP” – a callback to work at BERG, threading through 15-20 years of experiments with machine intelligence.

The core argument (which I’ve made in the Netherlands before…): we’ve spent too much time trying to make AI interfaces look and behave like humans, when the more interesting possibilities lie in going beyond anthropomorphic metaphors entirely.

What happens when we stop asking “how do we make this feel like talking to a person?” and start asking “what new kinds of interaction become possible when we’re working with a machine intelligence?”

I try in the talk to update my thinking here with the contemporary signals around more-than-human design and also more-than-LLM approaches to AI, namely so-called “World Models”.

What follows are the slides with my speaker notes – the expanded version of what I said on the day, with the connective tissue that doesn’t make it into the deck itself.

One of the nice things about going last is that you can adjust your talk and slides to include themes and work you’ve seen throughout the day – and I was particularly inspired by Ling Tan and Usman Haque’s opening keynote.

Thanks to Iskander and the whole ThingsCon team for the invitation, and to everyone who came up afterwards with questions, provocations, and adjacent projects I need to look at properly.



Hi I’m Matt – I’m a designer who studied architecture 30 years ago, then got distracted.

Around 20 years ago I met a bunch of folks in this room, and also started working on connected objects, machine intelligence and other things… Iskander asked me to talk a little bit about that!

I feel like I am in a safe space here, so I imagine many of you are like me and have a drawer like this, or even a brain like this… so hopefully this talk is going to have some connections that will be useful one day!

So with that said, back to BERG.

We were messing around with ML, especially machine vision – very early stuff – e.g. this experiment we did in the studio with Matt Biddulph to try and instrument the room, and find patterns of collaboration and space usage.

And at BERG we tended to have some recurring themes that we would resize and remix throughout our work.

BASAAP was one.

BASAAP is an acronym for Be As Smart As A Puppy – which actually I think first popped into my head while at Nokia a few years earlier.

It alludes to this quote from MIT roboticist and AI curmudgeon Rodney Brooks, who said that if we get the smartest folks together for 50 years to work on AI, we’ll be lucky if we can make it as smart as a puppy.


I guess back then we thought that puppy-like technologies in our homes sounded great!

We wanted to build those.

Also it felt like all the energy and effort to make technology human was kind of a waste.

We thought maybe you could find more delightful things on the non-human side of the uncanny valley…

And implicit in that, I guess, was a critique of the mainstream tech drive at the time (the earliest days of Siri, Google Assistant) toward voice interfaces, which was a dominant dream.

A Google VP at the time stated that their goal was to create ‘the Star Trek computer’.

Our clients really wanted things like this, and we had to point out that voice UIs are great for moving the plot of tv shows along.


I only recently (via the excellent Futurish podcast) learned this term – ‘hyperstition’ – a self-fulfilling idea that becomes real through its own existence (usually in movies or other fictions), e.g. flying cars.

And I’d argue we need to be critically aware of them still in our work…

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

Whatever your position on them, LLMs are in a hyperstitional loop right now of epic proportions.

Disclaimer: I’ve worked on them, I use them. I still try and think critically of them as material


And while it can feel like we have crossed the uncanny valley there, I think we can still look to the BASAAP thought to see if there’s other paths we can take with these technologies.

https://whatisintelligence.antikythera.org/

My old boss at Google, Blaise Agüera y Arcas has just published this fascinating book on the evolutionary and computational basis of intelligence.

In it he frames our current moment as the start of a ‘symbiosis’ of machine and human intelligence, much as we can see other systems of natural/artificial intelligences in our past – like farming, cities, economies.

There’s so much in there – but this line from an accompanying essay in Nature brings me back to BASAAP. “Their strengths and weaknesses are certainly different from ours” – so why as designers aren’t we exposing that more honestly?


In work I did in Blaise’s group at Google in 2018 we examined some ways to approach this – by explicitly surfacing an AI’s level of confidence in the UX.

Here’s a little mock-up of some work with Nord Projects from that time, where we imagined dynamic UI built by the agent to surface its uncertainties to its user – and, right up to date, papers published at the launch of Gemini 3 suggest the promise of generated UI could start to support stuff like that.


And just yesterday this new experimental browser ‘Disco’ was announced by Google Labs – which builds mini-apps based on what it thinks you’re trying to achieve…


But again let’s return to that thought about Machine Intelligence having a symbiosis with the human rather than mimicking it…


There could be more useful prompts from the non-human side of the uncanny valley… e.g. Spiders

I came across this piece in Quanta https://www.quantamagazine.org/the-thoughts-of-a-spiderweb-20170523/ some years back about cognitive science experiments on spiders revealing that their webs are part of their ‘cognitive equipment’. The last paragraph struck home – ‘cognition to be a property of integrated nonbiological components’

And… of course…

In Peter Godfrey-Smith’s wonderful book “Other Minds” he explores different models of cognition and consciousness through the lens of the octopus.

What I find fascinating is the distributed, embodied (rather than centralized) model of cognition they appear to have – with most of their ‘brains’ being in their tentacles…

I have always found this quote from ETH’s Bertrand Meyer inspiring… No need for ‘brains’!!!


“H is for Hawk” is a fantastic memoir of the relationship between someone and their companion species. Helen Macdonald writes beautifully about the ‘her that is not her’.

(footnote: I experimented with search-and-replace in her book here back in 2016: https://magicalnihilism.com/2016/06/08/h-is-for-hawk-mi-is-for-machine-intelligence/)

This is CavCam and CavStudio – more work by Nord Projects, with Alison Lentz, Alice Moloney and others in Google Research examining how these personalised trained models could become intelligent reactive ‘lenses’ for creative photography.

We could use AI in creating different complementary ‘umwelts’ for us.


I’m sure many of you are familiar with Thomas Nagel’s 1974 piece, “What is it like to be a bat?” – well, what if we can know that?

BAAAAT!!!

This ‘more than human’ approach to design has been evident in the room and in the zeitgeist for some time now.

We saw it beautifully in Ling Tan and Usman Haque’s work and practice this morning, and of course it’s been wonderfully examined in James Bridle’s writing and practice too.

Perhaps surprising is that the tech world is heading there too.

There’s a growing suspicion among AI researchers – voiced at their big event NeurIPS just a week or so ago – that the language model will need to be supplanted, or at least complemented, by other more embodied and physical approaches, including what are getting categorised as “World Models” – playing in the background is video from Google DeepMind’s announcement this autumn on this agenda.

Fei-Fei Li (one of the godmothers of the current AI boom) has a recent essay on Substack exploring this.

“Spatial Intelligence is the scaffolding upon which our cognition is built. It’s at work when we passively observe or actively seek to create. It drives our reasoning and planning, even on the most abstract topics. And it’s essential to the way we interact—verbally or physically, with our peers or with the environment itself.”

Here are some old friends from Google who have started a company – Archetype AI – looking at physical world AI models that are built up from a multiplicity of real-time sensor data…

As they mention the electrical grid – here’s some work from my time at solar/battery company Lunar Energy in 2022/23 that can illustrate the potential for such approaches.

In Japan, Lunar have a large fleet of batteries controlled by their Lunar AI platform. You can perhaps see in the time-series plot the battery sites ‘anticipating’ the approach of the typhoon and making sure they are charged to provide effective backup to the grid.

Together with my old BERG colleague Tom Armitage, I did some experiments at Lunar to bring these network behaviours to life with sound and data visualisations.

Maybe this is… “What is it like to be a BAT-tery’.

Sorry…


I think we might have had our own little moment of sonic hyperstition there…

So, to wrap up.

This summer I had an experience I have never had before.

I was caught in a wildfire.

I could see it on a map from space, with ML world models detecting it – but also with my eyes, 500m from me.

I got out – driving through the flames.

But it was probably the most terrifying thing that has ever happened to me… I was lucky. I was a tourist. I didn’t have to keep living there.

But as Ling and Usman pointed out – we are in a world now where these types of experiences are going to become more and more common.

And as they said – the only way out is through.

This is an Iceye Gen4 Synthetic Aperture Radar satellite – designed and built in Europe.

Here’s some imagery they released this past week of how they’ve been helping emergency response to the flooding in SE Asia – e.g. Sri Lanka here, with real-time imaging.

But as Ling said this morning – we can know more and more but it might not unlock the regenerative responses we need on its own.

How might we follow their example with these new powerful world modelling technologies?

As well as Ling and Usman’s work, responses like the ‘Resonant Computing’ manifesto (disclosure: I’m a co-signee/supporter) and the ‘planetary sapience’ visions of the Antikythera organisation give me hope we can direct these technologies to resize, remix and regenerate our lives and the living systems of the planet.

The AI-assisted permaculture future depicted in Ruthanna Emrys’ “A Half-Built Garden” gives me hope.

The rise of bioregional design as championed by the Future Observatory at the Design Museum in London gives me hope.

And I’ll leave you with the symbiotic nature/AI hope of my friends at Superflux and their project that asks, I guess – “What is it like to be a river?”…

https://superflux.in/index.php/work/nobody-told-me-rivers-dream/#

THANK YOU.

Speaking at ThingsCon 25 in Amsterdam: “Back to BASAAP”

I’ll be giving the closing keynote at ThingsCon 2025 in Amsterdam on Friday December 12th.

Thank you Iskander Smit and crew for inviting me!

The working title I gave them: “Back to BASAAP”

BASAAP stands for “Be As Smart As A Puppy” – something I originally wrote on a (physical) post-it note back when I worked at Nokia in 2005 or 2006. For the past 20 years I’ve been exploring metaphors and experiences that might arise from the technology we call ‘AI’ – and while a lot of us now talk to LLMs every day – they still might not… B… ASAAP…

Very pleased that I’ll be sharing the stage with old friends like James Bridle, Usman Haque and Kars Alfrink – and meeting new ones I’m sure.

The theme and line-up looks great – hope to see you there!

A year at Miro

I joined Miro a year ago this week, back in November 2024.

In my first few weeks I wrote down and shared with the team a few assumptions / goals / thoughts / biases / priors as a kind of pseudo-manifesto for how I thought we might proceed with AI in Miro, and I thought I’d dust them off.

About a month ago we released a bunch of AI features that the team did some amazing work on, and will continue to improve and iterate upon.

If I squint I can maybe see some of this in there, but of course a) it takes a village and b) a lot changed in both the world of AI and Miro in the course of 2025.

My contribution to what made it out there was ultimately pretty minimal, and all kudos for the stellar work that got launched go to Tilo Krueger, Roana Bilia, Mauricio Wolff, Sophia Omarji, Jai McKenzie, Shreya Bhardwaj, Ard Blok, Ahmed Genaidy, Ben Shih, Robert Kortenoeven, Andy Cullen, Feena O’Sullivan, Anna Gadomska, George Radu, Rune Schou, Kelly Dorsey, Maiko Senda and many many more brilliant design, product and engineering colleagues at Miro.

Anyway – FWIW I thought it would still be fun to post what I thought a year ago, as there might be something useful still there, accompanied by some slightly-odd Sub-Gondry stylings from Veo3.


Multiplayer / Multispecies

When we are building AI for Miro always bear in mind the human-centred team nature of innovation and making complex project work. Multiplayer scenarios are always the start of how we consider AI processes, and the special sauce of how we are different to other AI tools.


Minds on the Map

The canvas is a distinct advantage for creating an innovation workspace – the visibility and context that can be given to human team members should extend to the AI processes that can be brought to bear on it. They should use all the information created by human team members on the canvas in their work.


Help both Backstage & On-Stage

Work moves fluidly between unstructured and structured modes, asynchronous and synchronous, solo and team work – and there are aspects of preparation and performance to all of these. AI processes should work fluidly across all of them.


AI is always Non-Destructive

All AI processes aim to preserve and prioritise work done by human teams.


AI gets a Pencil, Humans get a Pen

Anything created by an AI process (initially) has a distinct visual/experiential identity so that human team members can identify it quickly.


No Teleporting

Don’t teleport users to a conclusion.

Where possible, expose the ‘chain of thought’ of the AI process so that users can understand how it arrived at the output, and edit/iterate on it.


AIs leave visible (actionable) evidence

Where possible, expose the AI processes’ ‘chain of thought’ on the board so that users can understand how it arrived at the output, and edit/iterate on it. Give hooks into this for integrations, and make sure context is well logged in versions/histories.


eBikes for the Mind

Humans always steer and control – but AI processes can accelerate and compress the distances travelled. They are mostly ‘pedal-assistance’ rather than self-driving.


Help do the work of the work

What are the AI processes that can accelerate or automate the work around the work – e.g. taking notes, scheduling, follow-ups, organising, coordinating – so that the human teammates can get on with the things they do best?


Using Miro to use Miro

Eventually, AI processes in Miro extend in competence to instigate and initiate work in teams in Miro. This could have its roots in composable workflows and intelligent templates, but extend to assembling/convening/facilitating significant amounts of multiplayer/multispecies work on an individual’s behalf.


My Miro AI

What memory / context can I count on to bring to my work that my agents or my team can use? How can I count on my agents not to start from scratch each time? Can I have projects I am working on with my agents over time? Are my agents ‘mine’? Can I bring my own AI, visualise and control other AI tools in Miro, or export the work of Miro agents to other tools, or take it with me when I move teams/jobs (within reason)? Do my agents have resumes?


The City is (still) a battlesuit for surviving the future.

Just watched Sir Norman Foster present at the World Design Congress in London, on cities and urbanism as a defence against climate change.

This excellent image visualises household carbon footprints – highlighting in coincidental green the extreme efficiency of NYC compared to the surrounding suburban sprawl of the emerging BAMA.

Sir Norman Foster presenting at the World Design Congress in London, with a map of household carbon footprints in New York City displayed behind him.

16 years ago this September, while at BERG, I wrote a piece at the invitation of Annalee Newitz for a science fiction focussed blog called io9, called “The City is a battlesuit for surviving the future”.

It’s still there: bit-rotted, battered, and oozing dangerously-outdated memetic fluids, like a Mark 1 Jaeger.

Bruce Sterling was (obliquely) very nice about it at the time, and lots of other folks wrote interesting (and far-better written) rebuttals.

I thought, as it’s 16 years old now, I should check in on it, with some distance, and give it a new home here.

I thankfully found my original non-edited google doc that I shared with Annalee, and it’s pasted below…

My friend Nick Foster is giving the closing keynote at the event Sir Norman spoke at tomorrow. He just wrote an excellent book on our attitudes to thinking about futures called “Could Should Might Don’t” – which I heartily recommend.

My little piece of amateur futurism from 2009 has a dose of all four – but for the reasons Sir Norman pointed out, I think it’s still a ‘Could’.

And… Still a ‘Should’.

The City is (still) a battlesuit for surviving the future.

Now, 16 yrs later, we ‘Might‘ build it up from Kardashev Streets


[The following is my unedited submission to io9.com, published 20th September 2009]

The city is a battlesuit for surviving the future.

Looking at the connections between architects and science-fiction’s visions of future cities

In February of this year I gave a talk at webstock in New Zealand, entitled “The Demon-Haunted World” – which investigated past visions of future cities in order to reflect upon work being done currently in the field of ‘urban computing’.

In particular I examined the radical work of influential ’60s architecture collective Archigram, who I found through my research had coined the term ‘social software’ back in 1972, 30 years before it was on the lips of Clay Shirky and other internet gurus.

Rather than building, Archigram were perhaps proto-bloggers – publishing a sought-after ‘magazine’ of images, collage, essays and provocations regularly through the 60s which had an enormous impact on architecture and design around the world, right through to the present day. Archigram have featured before on io9 [http://io9.com/5157087/a-city-that-walks-on-giant-actuators], and I’m sure they will again.

Archigram’s “Walking City” project: a conceptual vision of a mobile, mechanical city.

They referenced comics – American superhero aesthetics but also the stiff-upper-lips and cut-away precision engineering of Frank Hampson’s Dan Dare and Eagle – alongside pop-music, psychedelia, computing and pulp sci-fi, and put it all in a blender with a healthy dollop of Brit-eccentricity. They are perhaps most familiar from science-fictional images like their Walking City project, but at the centre of their work was a concern with cities as systems, reflecting the contemporary vogue for cybernetics and belief in automation.

The Pompidou Centre in Paris, with its exposed structural elements and colourful escalators.

Although Archigram didn’t build their visions, other architects brought aspects of them into the world. Echoes of their “Plug-in city” can undoubtedly be seen in Renzo Piano and Richard Rogers’ Pompidou Centre in Paris.

Much of the ‘hi-tech’ style of architecture (chiefly executed by British architects such as Rogers, Norman Foster and Nicholas Grimshaw) popular for corporate HQs and arts centres through the 80s and 90s can be traced back to, if not Archigram, then the same set of pop sci-fi influences that a generation of British schoolboys – who grew into world-class architects – were raised on.

Lord Rogers, as he now is, has made a second career of writing and lobbying about the future of cities worldwide. His books “Cities for a small planet” and “Cities for a small country” were based on work his architecture and urban-design practice did during the 80s and 90s, consulting on citymaking and redevelopment with national and regional governments. His work for Shanghai is heavily featured in ‘small planet’ – a plan that proposed the creation of an ecotopian mega city. This was thwarted, but he continues to campaign for renewed approaches to urban living.

Archigram graphic: “People Are Walking Architecture”.

Last year I saw him give a talk in London where he described the near-future of cities as one increasingly influenced by telecommunications and technology. He stated that “our cities are increasingly linked and learning” – this seemed to me a recapitulation of Archigram’s strategies, playing out not through giant walking cities but smaller, bottom-up technological interventions. The infrastructures we assemble and carry with us through the city – mobile phones, wireless nodes, computing power, sensor platforms – are changing how we interact with it and how it interacts with other places on the planet. After all, it was Archigram who said “people are walking architecture”.

Dan Hill (a consultant on how digital technology is changing cities for global engineering group Arup) in his epic blog post “The Street as Platform” [http://www.cityofsound.com/blog/2008/02/the-street-as-p.html] says “…the way the street feels may soon be defined by what cannot be seen by the naked eye”.

He goes on to explain:

“We can’t see how the street is immersed in a twitching, pulsing cloud of data. This is over and above the well-established electromagnetic radiation, crackles of static, radio waves conveying radio and television broadcasts in digital and analogue forms, police voice traffic.  This is a new kind of data, collective and individual, aggregated and discrete, open and closed, constantly logging impossibly detailed patterns of behaviour. The behaviour of the street.”

Adam Greenfield, a design director at Nokia wrote one of the defining texts on the design and use of ubiquitous computing or ‘ubicomp’ called “Everyware” [http://www.studies-observations.com/everyware/] and is about to release a follow-up on urban environments and technology called “The city is here for you to use”.

In a recent talk he framed a number of ways in which the access to data about your surroundings that Hill describes will change our attitude towards the city. He posits that we will move from a city we browser and wander to a ‘searchable, query-able’ city that we can not only read, but write-to as a medium.

He states

“The bottom-line is a city that responds to the behaviour of its users in something close to real-time,  and in turn begins to shape that behaviour”

Again, we’re not so far away from what Archigram were examining in the ’60s. Behaviour and information as raw materials to design cities with, as much as steel, glass and concrete.

The city of the future increases its role as an actor in our lives.

This, of course, is a recurrent theme in science-fiction and fantasy. In movies, it’s hard to get past the paradigm-defining dystopic backdrop of the city in Blade Runner, or the fin-de-siècle late-capitalism cage of the nameless, anonymous, bounded city of The Matrix.

Perhaps more resonant of the future described by Greenfield is the ever-changing stage-set of Alex Proyas’ “Dark City”.

For some of the greatest city-as-actor stories, though, it’s perhaps no surprise that we have to turn to comics, as Archigram did – and to the eponymous city of Warren Ellis and Darrick Robertson’s Transmetropolitan, as documented and half-destroyed by gonzo future journalist-messiah Spider Jerusalem.

Transmet’s city binds together perfectly a number of future-city fiction’s favourite themes: overwhelming size (reminiscent of the BAMA, or “Boston-Atlanta Metropolitan Axis”, from William Gibson’s “Sprawl” trilogy), patchworks of ‘cultural reservations’ (Stephenson’s Snow Crash with its three-ring-binder-governed, franchise-run statelets) and a constant, unrelenting future-shock as everyday as the weather… For which we can look to the comics-future-city grand-daddy of them all: Mega-City One.

Ah – The Big Meg, where at any moment on the mile-high Zipstrips you might be flattened by a rogue Boinger, set upon by a Futsie and thrown down onto the skedways far below, offered an illicit bag of umpty-candy or stookie-glands, and find yourself instantly at the mercy of the Judges. If you grew up on 2000AD like me, then your mind is probably now filled with a vivid picture of the biggest, toughest, weirdest future city there’s ever been.

This is a future city that has been lovingly detailed, weekly, for over three decades, as artist Matt Brooker (who goes by the pseudonym D’Israeli) points out:

Working on Lowlife, with its Mega-City One setting freed from the presence of Judge Dredd, I found myself thinking about the city and its place in the Dredd/2000AD franchise. And it occurred to me that, really, the city is the actual star of Judge Dredd. I mean, Dredd himself is a man of limited attributes and predictable reactions. His value is giving us a fixed point, a window through which to explore the endless fountain of new phenomena that is the Mega-City. It’s the Mega-City that powers Judge Dredd, and Judge Dredd that has powered 2000AD for the last 30 years.

Brooker, from his keen-eyed viewpoint as someone currently illustrating MC-1, examines the differing visions that artists like Carlos Ezquerra and Mike McMahon have brought to the city over the years in a wonderful blogpost, which I heartily recommend you read [http://disraeli-demon.blogspot.com/2009/04/lowlife-creation-part-five-all-joy-i.html]

Were Mega-City One’s creators influenced by Archigram or other radical architects?

I’d venture a “yes” on that.

Mike McMahon – seen by many, including Brooker and myself, as responsible for one of the definitive portrayals of The Big Meg – renders the giant, town-within-a-city Blocks as “pepperpots”: organic forms reminiscent of Ken Yeang (pictured here), or (former Rogers collaborator) Renzo Piano’s “green skyscrapers”.

While I’m unsure of the claim that MC-1 can trace its lineage back to radical ’60s architecture, it seems that the influence flowing the other direction, from comic book to architect, is far clearer.

Here in the UK, the Architect’s Journal went as far as to name it the number one comic book city [http://www.architectsjournal.co.uk/story.aspx?storyCode=5204830]

Echoing Brooker’s thoughts, they exclaim:

“Mega City One is the ultimate comic book city: bigger, badder, and more spectacular than its rivals. Its underlying design principle is simple – exaggeration – which actually lends it a coherence and character unlike any other. While Batman’s Gotham City and Superman’s Metropolis largely reflect the character of the superheroes who inhabit them (Gotham is grim, Metropolis shines) Mega City One presents an exuberant, absurd foil to Dredd’s rigid, monotonous outlook.”

Back in our world, the exaggerated mega-city is going through a bit of a bad patch.

The bling’d up ultraskyscraping and bespoke island-terraforming of Dubai is on hold until capitalism reboots, and changes in political fortune have nixed the futuristic, ubicomp’d-up Arup-designed ecotopia of Dongtan [http://en.wikipedia.org/wiki/Dongtan] in China.

But, these are but speedbumps on the road to the future city.

There are still ongoing efforts to create planned, model future cities, such as one that Nick Durrant of design consultancy Plot is working on in Abu Dhabi: Masdar City [http://en.wikipedia.org/wiki/Masdar_City]. It’s designed by another alumnus of the British hi-tech school – Sir Norman Foster. “Zero waste, carbon neutral, car free” is the slogan, and a close eye is being kept on it as a test-bed for clean-tech in cities.

We are now a predominantly urban species, with over 50% of humanity living in a city. The overwhelming majority of these are not old post-industrial world cities such as London or New York, but large chaotic sprawls of the industrialising world such as the “maximum cities” of Mumbai or Guangzhou [http://en.wikipedia.org/wiki/Guangzhou]. Here the infrastructures are layered, ad-hoc, adaptive and personal – people there really are walking architecture, as Archigram said.

Hacking post-industrial cities is becoming a necessity also. The “shrinking cities” project, http://www.shrinkingcities.com, is monitoring the trend in the west toward dwindling futures for cities such as Detroit and Liverpool.

They claim:

“In the 21st century, the historically unique epoch of growth that began with industrialization 200 years ago will come to an end. In particular, climate change, dwindling fossil sources of energy, demographic aging, and rationalization in the service industry will lead to new forms of urban shrinking and a marked increase in the number of shrinking cities.”

However, I’m optimistic about the future of cities. I’d contend cities are not just engines of invention in stories, they themselves are powerful engines of culture and re-invention.

David Byrne in the WSJ [http://is.gd/3q1Ca] as quoted by entrepreneur and co-founder of Flickr, Caterina Fake [http://caterina.net/] on her weblog recently:

“A city can’t be too small. Size guarantees anonymity—if you make an embarrassing mistake in a large city, and it’s not on the cover of the Post, you can probably try again. The generous attitude towards failure that big cities afford is invaluable—it’s how things get created. In a small town everyone knows about your failures, so you are more careful about what you might attempt.”

Patron saint of cities, Jane Jacobs, in her book “The Economy of Cities” put forward the ‘engines of invention’ argument in her theory of ‘import replacement’:

“…when a city begins to locally produce goods which it formerly imported, e.g., Tokyo bicycle factories replacing Tokyo bicycle importers in the 1800s. Jacobs claims that import replacement builds up local infrastructure, skills, and production. Jacobs also claims that the increased produce is exported to other cities, giving those other cities a new opportunity to engage in import replacement, thus producing a positive cycle of growth.”

Urban computing and gaming specialist, founder of Area/Code and ITP professor Kevin Slavin showed me a presentation by architect Dan Pitera about the scale and future of Detroit, and associated scenarios by city planners that would see the shrinking city deliberately intensify – creating urban farming zones from derelict areas so that it can feed itself locally. Import replacement writ large.

He also told me that 400 cities worldwide independently of their ‘host country’ agreed to follow the Kyoto protocol. Cities are entities that network outside of nations as their wealth often exceeds that of the rest of the nation put together – it’s natural they solve transnational, global problems.

Which leads me back to science-fiction. Warren Ellis created a character called Jack Hawksmoor in his superhero comic series The Authority.

The surname is a nice nod toward psychogeography and city-fans: Hawksmoor was an architect and protégé of Sir Christopher Wren, fictionalised into a murderous semi-mystical figure who shaped the city into a giant magical apparatus by Peter Ackroyd in an eponymous novel.

Ellis’ Hawksmoor, however, was abducted multiple times, seemingly by aliens, and surgically adapted to be ultimately suited to live in cities – they speak to him and he gains nourishment from them. If you’ll excuse the spoiler, the zenith of Hawksmoor’s adventures with cities comes when he finds the purpose behind the modifications – he was not altered by aliens but by future-humans, in order to defend the early 21st century against a time-travelling 73rd-century Cleveland gone berserk. Hawksmoor defeats the giant, monstrous sentient city by wrapping himself in Tokyo to form a massive concrete battlesuit.

Cities are the best battlesuits we have.

It seems to me that as we better learn how to design, use and live in cities – we all have a future.


Vibe-designing

Figma feels (to me) like one of those product design empathy experiences where you’re made to wear welding gloves to use household appliances.

I appreciate it’s very good for rapidly constructing utilitarian interfaces with extremely systemic approaches.

I just sometimes find myself staring at it (and/or swearing at it) when I mistakenly think of it as a tool for expression.

Currently I find myself in a role where I work mostly with people who are extremely good and fast at creating in Figma.

I am really not.

However, I have found that I can slowly tinker my way into translating my thoughts into Figma.

I just can’t think in or with Figma.

Currently there’s discussion of ‘vibe coding’ – that is, using LLMs to create code by iterating with prompts, quickly producing workable prototypes, then finessing them toward an end.

I’ve found myself ‘vibe designing’ in the last few months – thinking and outlining with pencil, pen and paper or (mostly physical) whiteboard as has been my habit for about 30 years, but with interludes of working with Claude (mainly) to create vignettes of interface, motion and interaction that I can pin onto the larger picture akin to a material sample on a mood board.

Where in the past 30 years I might have had to cajole a more technically adept colleague into making something through sketches, gesticulating and making sound effects – I open up a Claude window and start what-iffing.

It’s fast, cheap, and my more technically-adept colleagues can get on with something important while I go down a (perhaps fruitless) rabbit hole of trying to make a micro-interaction feel like something from a triple-A game.

The “vibe” part of the equation often defaults to the mean, which is not a surprise when you think about what you’re asking for help: a staggeringly-massive machine for producing generally-unsurprising, satisfactory answers quickly. So, you look at the output as a basis for the next sketch, and the next sketch, and quickly, together, you move to something more novel as a result.

Inevitably (or for now, if you believe the AI design thought-leadering that tools like replit, lovable, V0 etc will kill it) I hit the translate-into-Figma brick wall at some point, but in general I have a better boundary object to talk with other designers, product folk and engineers if my Figma skills don’t cut it to describe what I’m trying to describe.

Of course, being of a certain vintage, I can’t help but wonder whether sometimes the colleague-cajoling was the design process, and whether I’m missing out on the human what-iffing until later in the process.

I miss that, much as I miss being in a studio – but apart from rarefied exceptions that seems to be gone.

Vibe designing is turn-based single-player, for now… which brings me back to the day job…

100 years of The Shipping Forecast, 15 years of making things celebrating it…

Shipping Forecast Rosary

The Shipping Forecast is celebrating its centenary.

I’ve always loved this very British “accidental slice of nonsense poetry”.

About 15 years ago, I did a little project to make a ‘rosary’ for what is often referred to as a ‘secular prayer’.

Shipping Forecast Rosary – South East Iceland

Each of the forecast’s regions are represented in laser-cut marine plywood, and strung – in the order of the broadcast – to be thumbed through as you listen.

Shipping Forecast Rosary – German Bight

It was a very quick idea but I’ve always loved it – and it seemed to resonate with a few folks.

Perhaps I’ll re-issue it for the centenary over at http://magpieprint.works?

I also did another quick Shipping Forecast inspired piece for a newspaper the now-defunct cycling brand Vulpine put out back in 2013.

This attempted to create alternate coastlines from the shipping forecast areas.

I’m less happy with the execution here, but it’s still a fun idea. Might be more satisfying as something playable, generative – perhaps it has a future as a code experiment with an LLM’s assistance…

However – my favourite Shipping Forecast associated project is Matt Brown’s beautifully simple and evocative typography piece that still has pride of place on the wall.