“Back to BASAAP” – My talk at ThingsCon 2025 in Amsterdam

Last Friday I had the pleasure of speaking at ThingsCon in Amsterdam, invited by Iskander Smit to join a day exploring this year’s theme of ‘resize/remix/regen’.

The conference took place at CRCL Park on the Marineterrein – a former naval yard that’s spent the last 350 years behind walls, first as the Dutch East India Company’s shipbuilding site (they launched Michiel de Ruyter’s fleet from here in 1655), then as a sealed military base.

Since 2015 it’s been gradually opening up as an experimental district for urban innovation, with the kind of adaptive reuse that gives a place genuine character.

The opening keynote from Ling Tan and Usman Haque set a thoughtful and positive tone, and the whole day had an unusual quality – intimate scale, genuinely interactive workshops, student projects that weren’t just pinned to walls but actively part of the conversation. The kind of creative energy that comes from people actually making things rather than just talking about making things.

My talk was titled “Back to BASAAP” – a callback to work at BERG, threading through 15-20 years of experiments with machine intelligence.

The core argument (which I’ve made in the Netherlands before…): we’ve spent too much time trying to make AI interfaces look and behave like humans, when the more interesting possibilities lie in going beyond anthropomorphic metaphors entirely.

What happens when we stop asking “how do we make this feel like talking to a person?” and start asking “what new kinds of interaction become possible when we’re working with a machine intelligence?”

I try in the talk to update my thinking here with the contemporary signals around more-than-human design, and also more-than-LLM approaches to AI – namely so-called “World Models”.

What follows are the slides with my speaker notes – the expanded version of what I said on the day, with the connective tissue that doesn’t make it into the deck itself.

One of the nice things about going last is that you can adjust your talk and slides to include themes and work you’ve seen throughout the day – and I was particularly inspired by Ling Tan and Usman Haque’s opening keynote.

Thanks to Iskander and the whole ThingsCon team for the invitation, and to everyone who came up afterwards with questions, provocations, and adjacent projects I need to look at properly.



Hi I’m Matt – I’m a designer who studied architecture 30 years ago, then got distracted.

Around 20 years ago I met a bunch of folks in this room, and also started working on connected objects, machine intelligence and other things… Iskander asked me to talk a little bit about that!

I feel like I am in a safe space here, so I imagine many of you are like me and have a drawer like this, or even a brain like this… so hopefully this talk is going to have some connections that will be useful one day!

So with that said, back to BERG.

We were messing around with ML, especially machine vision – very early stuff – e.g. this experiment we did in the studio with Matt Biddulph to try and instrument the room, and find patterns of collaboration and space usage.

And at BERG we tended to have some recurring themes that we would resize and remix throughout our work.

BASAAP was one.

BASAAP is an acronym for Be As Smart As A Puppy – which actually I think first popped into my head while at Nokia a few years earlier.

It alludes to this quote from MIT roboticist and AI curmudgeon Rodney Brooks, who said that if we got the smartest folks together for 50 years to work on AI, we’d be lucky if we could make it as smart as a puppy.


I guess back then we thought that puppy-like technologies in our homes sounded great!

We wanted to build those.

Also it felt like all the energy and effort to make technology human was kind of a waste.

We thought maybe you could find more delightful things on the non-human side of the uncanny valley…

And implicit in that, I guess, was a critique of the mainstream tech drive toward voice interfaces at the time (around the earliest days of Siri and Google Assistant), which was a dominant dream.

A Google VP at the time stated that their goal was to create ‘the Star Trek computer’.

Our clients really wanted things like this, and we had to point out that voice UIs are great for moving the plot of TV shows along.


I only recently (via the excellent Futurish podcast) learned this term – ‘hyperstition’ – a self-fulfilling idea that becomes real through its own existence (usually in movies or other fictions), e.g. flying cars.

And I’d argue we need to be critically aware of them still in our work…

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

Whatever your position on them, LLMs are in a hyperstitional loop right now of epic proportions.

Disclaimer: I’ve worked on them, I use them. I still try to think critically about them as a material.


And while it can feel like we have crossed the uncanny valley there, I think we can still look to the BASAAP thought to see if there’s other paths we can take with these technologies.

https://whatisintelligence.antikythera.org/

My old boss at Google, Blaise Agüera y Arcas has just published this fascinating book on the evolutionary and computational basis of intelligence.

In it he frames our current moment as the start of a ‘symbiosis’ of machine and human intelligence, much as we can see other systems of natural/artificial intelligences in our past – like farming, cities, economies.

There’s so much in there – but this line from an accompanying essay in Nature brings me back to BASAAP: “Their strengths and weaknesses are certainly different from ours” – so why, as designers, aren’t we exposing that more honestly?


In work I did in Blaise’s group at Google in 2018 we examined some ways to approach this – by explicitly surfacing an AI’s level of confidence in the UX.

Here’s a little mock-up of some work with Nord Projects from that time, where we imagined dynamic UI built by the agent to surface its uncertainties to its user – and, right up to date, papers published at the launch of Gemini 3 suggest that generated UI could start to support stuff like that.


And just yesterday this new experimental browser, ‘Disco’, was announced by Google Labs – it builds mini-apps based on what it thinks you’re trying to achieve…


But again, let’s return to that thought about machine intelligence having a symbiosis with the human rather than mimicking it…


There could be more useful prompts from the non-human side of the uncanny valley… e.g. spiders.

I came across this piece in Quanta https://www.quantamagazine.org/the-thoughts-of-a-spiderweb-20170523/ some years back, about cognitive science experiments on spiders revealing that their webs are part of their ‘cognitive equipment’. The last paragraph struck home: ‘cognition to be a property of integrated nonbiological components’.

And… of course…

In Peter Godfrey-Smith’s wonderful book he explores different models of cognition and consciousness through the lens of the octopus.

What I find fascinating is the distributed, embodied (rather than centralized) model of cognition they appear to have – with most of their ‘brains’ being in their tentacles…

I have always found this quote from ETH’s Bertrand Meyer inspiring… No need for ‘brains’!!!


“H is for Hawk” is a fantastic memoir of the relationship between someone and their companion species. Helen Macdonald writes beautifully about the ‘her that is not her’.

(footnote: I experimented with search-and-replace in her book here back in 2016: https://magicalnihilism.com/2016/06/08/h-is-for-hawk-mi-is-for-machine-intelligence/)

This is CavCam and CavStudio – more work by Nord Projects, with Alison Lentz, Alice Moloney and others in Google Research examining how these personalised trained models could become intelligent reactive ‘lenses’ for creative photography.

We could use AI to create different complementary ‘umwelts’ for us.


I’m sure many of you are familiar with Thomas Nagel’s 1974 piece, ‘What is it like to be a bat?’ – well, what if we can know that?

BAAAAT!!!

This ‘more-than-human’ approach to design has been evident in the room and in the zeitgeist for some time now.

We saw it beautifully in Ling Tan and Usman Haque’s work and practice this morning, and of course it’s been wonderfully examined in James Bridle’s writing and practice too.

Perhaps surprisingly, the tech world is heading there too.

There’s a growing suspicion among AI researchers – voiced at their big event NeurIPS just a week or so ago – that the language model will need to be supplanted, or at least complemented, by other more embodied and physical approaches, including what are getting categorised as “World Models” – playing in the background is video from Google DeepMind’s announcement this autumn on this agenda.

Fei-Fei Li (one of the godmothers of the current AI boom) has a recent essay on Substack exploring this.

“Spatial Intelligence is the scaffolding upon which our cognition is built. It’s at work when we passively observe or actively seek to create. It drives our reasoning and planning, even on the most abstract topics. And it’s essential to the way we interact—verbally or physically, with our peers or with the environment itself.”

Here are some old friends from Google who have started a company – Archetype AI – looking at physical world AI models that are built up from a multiplicity of real-time sensor data…

As they mention the electrical grid – here’s some work from my time at solar/battery company Lunar Energy in 2022/23 that can illustrate the potential for such approaches.

In Japan, Lunar have a large fleet of batteries controlled by their Lunar AI platform. You can perhaps see in the time-series plot the battery sites ‘anticipating’ the approach of the typhoon and making sure they are charged to provide effective backup to the grid.

Together with my old BERG colleague Tom Armitage, I did some experiments at Lunar to bring these network behaviours to life with sound and data visualisations.

Maybe this is… “What is it like to be a BAT-tery?”

Sorry…


I think we might have had our own little moment of sonic hyperstition there…

So, to wrap up.

This summer I had an experience I have never had before.

I was caught in a wildfire.

I could see it on a map from space, with ML world models detecting it – but also with my eyes, 500m from me.

I got out – driving through the flames.

But it was probably the most terrifying thing that has ever happened to me… I was lucky. I was a tourist. I didn’t have to keep living there.

But as Ling and Usman pointed out – we are in a world now where these types of experiences are going to become more and more common.

And as they said – the only way out is through.

This is an ICEYE Gen4 Synthetic Aperture Radar satellite – designed and built in Europe.

Here’s some imagery they released this past week showing how they’ve been helping emergency response to the flooding in SE Asia – e.g. Sri Lanka, here, with real-time imaging.

But as Ling said this morning – we can know more and more but it might not unlock the regenerative responses we need on its own.

How might we follow their example with these new powerful world modelling technologies?

As well as Ling and Usman’s work, responses like the ‘Resonant Computing’ manifesto (disclosure: I’m a co-signatory/supporter) and the ‘planetary sapience’ visions of the Antikythera organisation give me hope that we can direct these technologies to resize, remix and regenerate our lives and the living systems of the planet.

The AI-assisted permaculture future depicted in Ruthanna Emrys’ “A Half-Built Garden” gives me hope.

The rise of bioregional design as championed by the Future Observatory at the Design Museum in London gives me hope.

And I’ll leave you with the symbiotic nature/AI hope of my friends at Superflux and their project that asks, I guess – “What is it like to be a river?”…

https://superflux.in/index.php/work/nobody-told-me-rivers-dream/#

THANK YOU.

Speaking at ThingsCon 25 in Amsterdam: “Back to BASAAP”

I’ll be giving the closing keynote at ThingsCon 2025 in Amsterdam on Friday December 12th.

Thank you Iskander Smit and crew for inviting me!

The working title I gave them: “Back to BASAAP”.

BASAAP stands for “Be As Smart As A Puppy” – something I originally wrote on a (physical) Post-it note back when I worked at Nokia in 2005 or 2006. For the past 20 years I’ve been exploring metaphors and experiences that might arise from the technology we call ‘AI’ – and while a lot of us now talk to LLMs every day – they still might not… B… ASAAP…

Very pleased that I’ll be sharing the stage with old friends like James Bridle, Usman Haque and Kars Alfrink – and meeting new ones I’m sure.

The theme and line-up looks great – hope to see you there!

A year at Miro

I joined Miro a year ago this week, back in November 2024.

In my first few weeks I wrote down and shared with the team a few assumptions / goals / thoughts / biases / priors as a kind of pseudo-manifesto for how I thought we might proceed with AI in Miro, and I thought I’d dust them off.

About a month ago we released a bunch of AI features that the team did some amazing work on, and will continue to improve and iterate upon.

If I squint I can maybe see some of this in there, but of course a) it takes a village and b) a lot changed in both the world of AI and Miro in the course of 2025.

My contribution to what made it out there was ultimately pretty minimal, and all kudos for the stellar work that got launched go to Tilo Krueger, Roana Bilia, Mauricio Wolff, Sophia Omarji, Jai McKenzie, Shreya Bhardwaj, Ard Blok, Ahmed Genaidy, Ben Shih, Robert Kortenoeven, Andy Cullen, Feena O’Sullivan, Anna Gadomska, George Radu, Rune Schou, Kelly Dorsey, Maiko Senda and many many more brilliant design, product and engineering colleagues at Miro.

Anyway – FWIW I thought it would still be fun to post what I thought a year ago, as there might be something useful still in there, accompanied by some slightly-odd sub-Gondry stylings from Veo 3.


Multiplayer / Multispecies

When we are building AI for Miro, always bear in mind the human-centred, team-based nature of innovation and complex project work. Multiplayer scenarios are always the start of how we consider AI processes, and the special sauce of how we are different to other AI tools.


Minds on the Map

The canvas is a distinct advantage for creating an innovation workspace – the visibility and context that can be given to human team members should extend to the AI processes that can be brought to bear on it. They should use all the information created by human team members on the canvas in their work.


Help both Backstage & On-Stage

Work moves fluidly between unstructured and structured modes, asynchronous and synchronous, solo and team work – and there are aspects of preparation and performance to all of these. AI processes should work fluidly across all of them.


AI is always Non-Destructive

All AI processes aim to preserve and prioritise work done by human teams.


AI gets a Pencil, Humans get a Pen

Anything created by an AI process (initially) has a distinct visual/experiential identity so that human team members can identify it quickly.


No Teleporting

Don’t teleport users to a conclusion.

Where possible, expose the ‘chain of thought’ that the AI process followed, so that users can understand how it arrived at the output, and edit/iterate on it.


AIs leave visible (actionable) evidence

Where possible, expose the AI processes’ ‘chain of thought’ on the board so that users can understand how it arrived at the output, and edit/iterate on it. Give hooks into this for integrations, and make sure context is well logged in versions/histories.


eBikes for the Mind

Humans always steer and control – but AI processes can accelerate and compress the distances travelled. They are mostly ‘pedal-assistance’ rather than self-driving.


Help do the work of the work

What are the AI processes that can accelerate or automate the work around the work – e.g. taking notes, scheduling, follow-ups, organising, coordinating – so that the human teammates can get on with the things they do best?


Using Miro to use Miro

Eventually, AI processes in Miro extend in competence to instigate and initiate work in teams in Miro. This could have its roots in composable workflows and intelligent templates, but extend to assembling/convening/facilitating significant amounts of multiplayer/multispecies work on an individual’s behalf.


My Miro AI

What memory/context can I count on to bring to my work, that my agents or my team can use? How can I count on my agents not to start from scratch each time? Can I have projects I am working on with my agents over time? Are my agents ‘mine’? Can I bring my own AI, visualise and control other AI tools in Miro, or export the work of Miro agents to other tools, or take it with me when I move teams/jobs (within reason)? Do my agents have resumes?


The City is (still) a battlesuit for surviving the future.

Just watched Sir Norman Foster present at the World Design Congress in London, on cities and urbanism as a defence against climate change.

This excellent image visualises household carbon footprints – highlighting in coincidental green the extreme efficiency of NYC compared to the surrounding suburban sprawl of the emerging BAMA.

Sir Norman Foster presenting at the World Design Congress in London, discussing urbanism and climate change while a colorful map of household carbon footprints in New York City is displayed.

16 years ago this September, while at BERG, I wrote a piece at the invitation of Annalee Newitz for a science-fiction-focussed blog called io9, entitled “The City is a battlesuit for surviving the future”.

It’s still there: bit-rotted, battered, and oozing dangerously-outdated memetic fluids, like a Mark 1 Jaeger.

Bruce Sterling was (obliquely) very nice about it at the time, and lots of other folks wrote interesting (and far-better written) rebuttals.

I thought, as it’s 16 years old now, I should check in on it, with some distance, and give it a new home here.

I thankfully found my original unedited Google Doc that I shared with Annalee, and it’s pasted below…

My friend Nick Foster is giving the closing keynote at the event Sir Norman spoke at tomorrow. He just wrote an excellent book on our attitudes to thinking about futures called “Could Should Might Don’t” – which I heartily recommend.

My little piece of amateur futurism from 2009 has a dose of all four – but for the reasons Sir Norman pointed out, I think it’s still a ‘Could’.

And… Still a ‘Should’.

The City is (still) a battlesuit for surviving the future.

Now, 16 years later, we ‘Might’ build it up from Kardashev Streets.


[The following is my unedited submission to io9.com, published 20th September 2009]

The city is a battlesuit for surviving the future.

Looking at the connections between architects and science-fiction’s visions of future cities

In February of this year I gave a talk at Webstock in New Zealand, entitled “The Demon-Haunted World” – which investigated past visions of future cities in order to reflect upon work being done currently in the field of ‘urban computing’.

In particular I examined the radical work of influential ’60s architecture collective Archigram, who, I found through my research, had coined the term ‘social software’ back in 1972, 30 years before it was on the lips of Clay Shirky and other internet gurus.

Rather than building, Archigram were perhaps proto-bloggers – publishing a sought-after ‘magazine’ of images, collage, essays and provocations regularly through the ’60s, which had an enormous impact on architecture and design around the world, right through to the present day. Archigram have featured before on io9 [http://io9.com/5157087/a-city-that-walks-on-giant-actuators], and I’m sure they will again.

Archigram's "Walking City" Project: An artistic depiction of a futuristic city designed to be mobile, with mechanical elements and skyscrapers in the background, representing a conceptual vision of urban living.

They referenced comics – American superhero aesthetics but also the stiff upper lips and cut-away precision engineering of Frank Hampson’s Dan Dare and Eagle – alongside pop music, psychedelia, computing and pulp sci-fi, and put it all in a blender with a healthy dollop of Brit-eccentricity. They are perhaps most familiar from science-fictional images like their Walking City project, but at the centre of their work was a concern with cities as systems, reflecting the contemporary vogue for cybernetics and belief in automation.

Exterior view of the Pompidou Centre in Paris, showcasing its unique architectural design with exposed structural elements and colorful escalators.

Although Archigram didn’t build their visions, other architects brought aspects of them into the world. Echoes of their “Plug-in city” can undoubtedly be seen in Renzo Piano and Richard Rogers’ Pompidou Centre in Paris.

Much of the ‘hi-tech’ style of architecture (chiefly executed by British architects such as Rogers, Norman Foster and Nicholas Grimshaw) popular for corporate HQs and arts centers through the 80s and 90s can be traced back to, if not Archigram, then the same set of pop sci-fi influences that a generation of British schoolboys grew up with – before growing into world-class architects.

Lord Rogers, as he now is, has made a second career of writing and lobbying about the future of cities worldwide. His books “Cities for a small planet” and “Cities for a small country” were based on work his architecture and urban-design practice did during the 80s and 90s, consulting on citymaking and redevelopment with national and regional governments. His work for Shanghai is heavily featured in ‘small planet’ – a plan that proposed the creation of an ecotopian mega city. This was thwarted, but he continues to campaign for renewed approaches to urban living.

Colorful graphic featuring a futuristic city skyline with the text 'People Are Walking Architecture' prominently displayed. The design includes abstract shapes and visual elements associated with urban architecture and the concept of people as integral to city structures.

Last year I saw him give a talk in London where he described the near-future of cities as one increasingly influenced by telecommunications and technology. He stated that “our cities are increasingly linked and learning” – this seemed to me a recapitulation of Archigram’s strategies, playing out not through giant walking cities but smaller, bottom-up technological interventions. The infrastructures we assemble and carry with us through the city – mobile phones, wireless nodes, computing power, sensor platforms – are changing how we interact with it and how it interacts with other places on the planet. After all, it was Archigram who said “people are walking architecture”.

Dan Hill (a consultant on how digital technology is changing cities for global engineering group Arup) in his epic blog post “The Street as Platform” [http://www.cityofsound.com/blog/2008/02/the-street-as-p.html] says “…the way the street feels may soon be defined by what cannot be seen by the naked eye”.

He goes on to explain:

“We can’t see how the street is immersed in a twitching, pulsing cloud of data. This is over and above the well-established electromagnetic radiation, crackles of static, radio waves conveying radio and television broadcasts in digital and analogue forms, police voice traffic.  This is a new kind of data, collective and individual, aggregated and discrete, open and closed, constantly logging impossibly detailed patterns of behaviour. The behaviour of the street.”

Adam Greenfield, a design director at Nokia, wrote one of the defining texts on the design and use of ubiquitous computing or ‘ubicomp’, called “Everyware” [http://www.studies-observations.com/everyware/], and is about to release a follow-up on urban environments and technology called “The city is here for you to use”.

In a recent talk he framed a number of ways in which the access to data about your surroundings that Hill describes will change our attitude towards the city. He posits that we will move from a city we browse and wander to a ‘searchable, query-able’ city that we can not only read, but write to as a medium.

He states

“The bottom-line is a city that responds to the behaviour of its users in something close to real-time,  and in turn begins to shape that behaviour”

Again, we’re not so far away from what Archigram were examining in the ’60s. Behaviour and information as raw materials to design cities with, as much as steel, glass and concrete.

The city of the future takes on an ever-greater role as an actor in our lives.

This, of course, is a recurrent theme in science-fiction and fantasy. In movies, it’s hard to get past the paradigm-defining dystopic backdrop of the city in Blade Runner, or the fin-de-siècle late-capitalism cage of the nameless, anonymous, bounded city of The Matrix.

Perhaps more resonant of the future described by Greenfield is the ever-changing stage-set of Alex Proyas’ “Dark City”.

For some of the greatest-city-as-actor stories though, it’s perhaps no surprise that we have to turn to comics as Archigram did – and the eponymous city of Warren Ellis and Darrick Robertson’s Transmetropolitan as documented and half-destroyed by gonzo future journalist-messiah Spider Jerusalem.

Transmet’s city binds together perfectly a number of future-city fiction’s favourite themes: overwhelming size (reminiscent of the BAMA, or “Boston-Atlanta Metropolitan Axis”, from William Gibson’s “Sprawl” trilogy), patchworks of ‘cultural reservations’ (Stephenson’s Snow Crash with its three-ring-binder-governed, franchise-run statelets) and a constant, unrelenting future-shock as everyday as the weather… For which we can look to the comics-future-city grand-daddy of them all: Mega-City One.

Ah – The Big Meg, where at any moment on the mile-high Zipstrips you might be flattened by a rogue Boinger, set upon by a Futsie and thrown down onto the skedways far below, offered an illicit bag of umpty-candy or stookie-glands, and find yourself instantly at the mercy of the Judges. If you grew up on 2000AD like me, then your mind is probably now filled with a vivid picture of the biggest, toughest, weirdest future city there’s ever been.

This is a future city that has been lovingly detailed, weekly, for over three decades, as artist Matt Brooker (who goes by the pseudonym D’Israeli) points out:

Working on Lowlife, with its Mega-City One setting freed from the presence of Judge Dredd, I found myself thinking about the city and its place in the Dredd/2000AD franchise. And it occurred to me that, really, the city is the actual star of Judge Dredd. I mean, Dredd himself is a man of limited attributes and predictable reactions. His value is giving us a fixed point, a window through which to explore the endless fountain of new phenomena that is the Mega-City. It’s the Mega-City that powers Judge Dredd, and Judge Dredd that has powered 2000AD for the last 30 years.

Brooker, from his keen-eyed viewpoint as someone currently illustrating MC-1, examines the differing visions that artists like Carlos Ezquerra and Mike McMahon have brought to the city over the years in a wonderful blogpost, which I heartily recommend you read [http://disraeli-demon.blogspot.com/2009/04/lowlife-creation-part-five-all-joy-i.html]

Were Mega-City One’s creators influenced by Archigram or other radical architects?

I’d venture a “yes” on that.

Mike McMahon, seen by many, including Brooker and myself, as one of the definitive portrayers of The Big Meg, renders the giant town-within-a-city Blocks as ‘pepperpots’ – organic forms reminiscent of Ken Yeang (pictured here), or (former Rogers collaborator) Renzo Piano’s ‘green skyscrapers’.

While I’m unsure of the claim that MC-1 can trace its lineage back to radical ’60s architecture, it seems that the influence flowing the other direction, from comic book to architect, is far clearer.

Here in the UK, the Architect’s Journal went as far as to name it the number one comic book city [http://www.architectsjournal.co.uk/story.aspx?storyCode=5204830]

Echoing Brooker’s thoughts, they exclaim:

“Mega City One is the ultimate comic book city: bigger, badder, and more spectacular than its rivals. Its underlying design principle is simple – exaggeration – which actually lends it a coherence and character unlike any other. While Batman’s Gotham City and Superman’s Metropolis largely reflect the character of the superheroes who inhabit them (Gotham is grim, Metropolis shines) Mega City One presents an exuberant, absurd foil to Dredd’s rigid, monotonous outlook.”

Back in our world, the exaggerated mega-city is going through a bit of a bad patch.

The bling’d up ultraskyscraping and bespoke island-terraforming of Dubai is on hold until capitalism reboots, and changes in political fortune have nixed the futuristic, ubicomp’d-up Arup-designed ecotopia of Dongtan [http://en.wikipedia.org/wiki/Dongtan] in China.

But, these are but speedbumps on the road to the future city.

There are still ongoing efforts to create planned, model future cities, such as one that Nick Durrant of design consultancy Plot is working on in Abu Dhabi: Masdar City [http://en.wikipedia.org/wiki/Masdar_City]. It’s designed by another alumnus of the British hi-tech school – Sir Norman Foster. “Zero waste, carbon neutral, car free” is the slogan, and a close eye is being kept on it as a test-bed for clean-tech in cities.

We are now a predominantly urban species, with over 50% of humanity living in a city. The overwhelming majority of these are not old post-industrial world cities such as London or New York, but large chaotic sprawls of the industrialising world such as the “maximum cities” of Mumbai or Guangzhou [http://en.wikipedia.org/wiki/Guangzhou]. Here the infrastructures are layered, ad-hoc, adaptive and personal – people there really are walking architecture, as Archigram said.

Hacking post-industrial cities is becoming a necessity also. The “shrinking cities” project, http://www.shrinkingcities.com, is monitoring the trend in the west toward dwindling futures for cities such as Detroit and Liverpool.

They claim:

“In the 21st century, the historically unique epoch of growth that began with industrialization 200 years ago will come to an end. In particular, climate change, dwindling fossil sources of energy, demographic aging, and rationalization in the service industry will lead to new forms of urban shrinking and a marked increase in the number of shrinking cities.”

However, I’m optimistic about the future of cities. I’d contend cities are not just engines of invention in stories, they themselves are powerful engines of culture and re-invention.

David Byrne in the WSJ [http://is.gd/3q1Ca] as quoted by entrepreneur and co-founder of Flickr, Caterina Fake [http://caterina.net/] on her weblog recently:

“A city can’t be too small. Size guarantees anonymity—if you make an embarrassing mistake in a large city, and it’s not on the cover of the Post, you can probably try again. The generous attitude towards failure that big cities afford is invaluable—it’s how things get created. In a small town everyone knows about your failures, so you are more careful about what you might attempt.”

Patron saint of cities, Jane Jacobs, in her book “The Economy of Cities” put forward the ‘engines of invention’ argument in her theory of ‘import replacement’:

“…when a city begins to locally produce goods which it formerly imported, e.g., Tokyo bicycle factories replacing Tokyo bicycle importers in the 1800s. Jacobs claims that import replacement builds up local infrastructure, skills, and production. Jacobs also claims that the increased produce is exported to other cities, giving those other cities a new opportunity to engage in import replacement, thus producing a positive cycle of growth.”

Urban computing and gaming specialist, founder of Area/Code and ITP professor Kevin Slavin showed me a presentation by architect Dan Pitera about the scale and future of Detroit, and associated scenarios by city planners that would see the shrinking city deliberately intensify – creating urban farming zones from derelict areas so that it can feed itself locally. Import replacement writ large.

He also told me that 400 cities worldwide independently of their ‘host country’ agreed to follow the Kyoto protocol. Cities are entities that network outside of nations as their wealth often exceeds that of the rest of the nation put together – it’s natural they solve transnational, global problems.

Which leads me back to science-fiction. Warren Ellis created a character called Jack Hawksmoor in his superhero comic series The Authority.

The surname is a nice nod toward psychogeography and city-fans: Hawksmoor was an architect and protégé of Sir Christopher Wren, fictionalised into a murderous semi-mystical figure who shaped the city into a giant magical apparatus by Peter Ackroyd in an eponymous novel.

Ellis’ Hawksmoor, however, was abducted multiple times, seemingly by aliens, and surgically adapted to be ultimately suited to live in cities – they speak to him and he gains nourishment from them. If you’ll excuse the spoiler, the zenith of Hawksmoor’s adventures with cities comes when he finds the purpose behind the modifications – he was not altered by aliens but by future-humans, in order to defend the early 21st century against a time-travelling 73rd-century Cleveland gone berserk. Hawksmoor defeats the giant, monstrous sentient city by wrapping himself in Tokyo to form a massive concrete battlesuit.

Cities are the best battlesuits we have.

It seems to me that as we better learn how to design, use and live in cities – we all have a future.


Vibe-designing

Figma feels (to me) like one of those product design empathy experiences where you’re made to wear welding gloves to use household appliances.

I appreciate it’s very good for rapidly constructing utilitarian interfaces with extremely systemic approaches.

I just sometimes find myself staring at it (and/or swearing at it) when I mistakenly think of it as a tool for expression.

Currently I find myself in a role where I work mostly with people who are extremely good and fast at creating in Figma.

I am really not.

However, I have found that I can slowly tinker my way into translating my thoughts into Figma.

I just can’t think in or with Figma.

Currently there’s discussion of ‘vibe coding’ – that is, using LLMs to create code by iterating with prompts, quickly producing workable prototypes, then finessing them toward an end.

I’ve found myself ‘vibe designing’ in the last few months – thinking and outlining with pencil, pen and paper or (mostly physical) whiteboard as has been my habit for about 30 years, but with interludes of working with Claude (mainly) to create vignettes of interface, motion and interaction that I can pin onto the larger picture akin to a material sample on a mood board.

Where in the past 30 years I might have had to cajole a more technically adept colleague into making something through sketches, gesticulating and making sound effects – I open up a Claude window and start what-iffing.

It’s fast, cheap and my more technically-adept colleagues can get on with something important while I go down a (perhaps fruitless) rabbit hole of trying to make a micro-interaction feel like something from an AAA game.

The “vibe” part of the equation often defaults to the mean, which is not a surprise when you consider that the thing you’re asking for help is a staggeringly-massive machine for producing generally-unsurprising, satisfactory answers quickly. So, you look at the output as a basis for the next sketch, and the next, and quickly, together, you move to something more novel as a result.

Inevitably (or for now, if you believe the AI-design thought-leadering that tools like Replit, Lovable, v0 etc. will kill it) I hit the translate-into-Figma brick wall at some point, but in general I have a better boundary object to talk with other designers, product folk and engineers when my Figma skills don’t cut it to describe what I’m trying to describe.

Of course, being of a certain vintage, I can’t help but wonder whether sometimes the colleague-cajoling was the design process, and I’m missing out on the human what-iffing until later in the process.

I miss that, much as I miss being in a studio – but apart from rarefied exceptions that seems to be gone.

Vibe designing is turn-based single-player, for now… which brings me back to the day job…

Wibble-y-Wobble-y, Pace-y-Wace-y

Was able to get some time this week to catch up with Bryan Boyer.

We talked about some of the work he was doing with his students, particularly challenging them to think about design interventions, and prototyping those across the ‘pace layers’ as famously depicted by Stewart Brand in his book “How Buildings Learn”.

The image is totemic for design practitioners and theorists of a certain vintage (although I’m not sure how fully it resonates with today’s digital ‘product’ design / UX/UI generation) and certainly has been something I’ve wielded over the last two decades or so.

I think my first encounter with it would have been around 2002/2003 or so, in my time at Nokia.

I distinctly remember a conference where (perhaps unsurprisingly!) Dan Hill quoted it – I think it was DIS in Cambridge Massachusetts, where I also memorably got driven around one night in a home-brew dune buggy built and piloted (for want of a better term) by Saul Griffith.

For those not familiar with it – here it is. 

The ‘point’ is to show the different cadences of change and progress in different idealised strata of civilisation (perhaps a somewhat narrow, WEIRD-ly defined civilisation) – and moreover, much like the slips, schisms and landslides of different geological layers, to make the reader aware of the shearing forces and tensions between those layers.

It is a constant presence in the discourse, which leads both to its dismissal and to its uncritical acceptance as a cliché.

But this familiarity, aside from breeding contempt, means it is also something quite fun to play with in semi-critical ways.

While talking with Bryan, I discussed the biases perhaps embedded in showing ‘fashion’ as a wiggly ‘irrational’ line compared to the other layers. 

What thoughts may come from depicting all the layers as wiggly?

Another thought from our chat was to extend the geological metaphor to the layers.

Geologists and earth scientists often find the most interesting things at the interstices of the layers. Deposits or thin layers that tell a rich tale of the past. Tell-tale indicators of calamity, such as the K–Pg/K-T boundary. Annals of a former world.

The laminar boundary between infrastructure and institutions is perhaps the layer that gets the least examination in our current obsession with “product”…

I’ve often discussed with folks the many situations where infrastructure (capex) is mistaken for something that can replace institutions/labour (opex) – and where the role of service design interventions or strategic design prototypes can help mitigate.

In the pace layers, perhaps we can call that the “Dan Hill Interstitial Latencies Layer” – pleasingly recurrent in its acronymic form (D-HILL) and make it irregular and gnarly to indicate the difficulties there…

The Representational Planar OP-Ex layer (R-POPE) might be another good name, paying homage to the other person I associate with this territory, Richard Pope. I’ve just started reading Richard’s book “Platformland” which I’m sure will have a lot to say about it.

I just finished Deb Chachra’s excellent “How Infrastructure Works”, which while squarely examining the infrastructure pace layer points out the interfaces and interconnections with all the others.

“We might interact with them as individuals but they’re inherently collective, social, and spatial. Because they bring resources to where they’re used, they create enduring relationships not just between the people who share the network but also between those people and place, where they are in the world and the landscape the network traverses. These systems make manifest our ability to cooperate to meet universal needs and care for each other.”

So, perhaps… rather than superficial snark about a design talk cliche, the work of unpacking and making connective tissue across the pace layers might seem more vital in that context.

Household Spirits for Stations: InfoTotems at Brockley Station

I left my job at Lunar Energy last month and August has been about recharging – some holidays with family and also wandering London a bit catching up with folks, seeing some art/design, and generally regenerative flaneur-y.

Yesterday, for instance, I was off to lunch with my talented friends at the industrial design firm Approach Studio in Hackney.

This entailed getting the Overground, and in doing so I found something wonderful at Brockley Station.

Placed along the platform were “InfoTotems” (at least that’s what they were called on the back of them). Sturdy, about 1.5m high and with – crucially in the bright SE London sunlight of August – easily-readable low-power E-Ink screens.

E-ink “InfoTotem” at Brockley Station, SE London

They seemed to function as very simple but effective dynamic way-finding, nudging me down the platform to where it predicted I’d find a less-busy carriage.

Location sensitive, dynamic signage: E-ink “InfoTotem” at Brockley Station, SE London

Wonderfully, when I did so, I got this message on the next InfoTotem.

“You’re in the right place”: contextual reassurance from E-ink “InfoTotem” at Brockley Station, SE London

Nothing more than that – no extraneous information, just something very simple, reassuring and useful.

It felt really appropriate and thoughtful.

Not overreaching, over-promising, or overloading with *everything* else this thing could possibly do as a software-controlled surface.

Very nice, TfL folks.

E-ink “InfoTotem” at Brockley Station, SE London: Back view, UID…

I’m going to try to do a bit more poking on the provenance of this work, and where it might be heading, as I find it really delightful.

So far, I think the HW itself might be this, from a small UK company called Melford Technologies.

It made me recall one of my favourite BERG projects I worked on, “The Journey” which was for Dentsu London – looking at ways to augment the realities of a train journey with light touch digital interventions on media surfaces along the timeline of the experience.

Place-based reassurance: Sketch for “The Journey” work with BERG for Dentsu London

Place-based reassurance: E-Ink magnetic-backed dynamic signage. Still from “The Journey” work with BERG for Dentsu London

I think what I like about the InfoTotems is that instead of a singular product doing a thing on the platform, it’s treated as a spatial experience between the HW surfaces, and as a result it feels like a service inhabiting the place, rather than just a product.

Without that overloading I was referring to, what else could they do?

Obviously this example of nudging me down the platform to a less-busy carriage is based on telemetry it’s received from the arriving train.

Could there be more that conveys the spirit of the place – observations or useful nuggets – connected to where you are temporarily, but where the totems sit more permanently?

In “The Journey” there’s a lovely short bit where Jack is travelling through the UK countryside and looks at a ticket that has been printed for him, a kind of low-res augmented reality.

It’s a prompt for him to look out the window to notice something, knowing where he’s sitting and what time he’s going to go past a landmark.

Could low-powered edge AI start to do something akin to this? To build out context or connections between observations made about the surroundings?

Cyclist counter sign in Manchester UK, Image via road.cc

We’ve all seen signs that count – for example ‘water bottles filled’ or ‘bike riders using this route today’ – but an edge AI could perhaps do something more lyrical, or again use the multiple positioned screens along the platform to tell a more serialised, unique story.

Maybe something like Matt W’s Poem/1 e-ink clock – with industrial design from Approach Studio, coincidentally!

Maybe it has a memory of place, a journal. It would need some delicate, sensitive, playful, non-creepy design – as well as technological underpinnings, i.e. privacy-preserving sensing and edge AI.

I recall Matt Webb also working with Hoxton Analytics who were pursuing stuff in this space to create non-invasive sensing of things like traffic and footfall in commercial space.

In terms of edge AI that starts to relate to the spatial world, I’m tracking the work of friends who have started Archetype.ai to look at just that. I need to delve into it and understand it more.

Perhaps it would then also need something like the work that Patrick Keenan and others did back at Sidewalk Labs to create a typology of sensing in public places.

“A visual language to demystify the tech in cities.” – Patrick Keenan et al, Sidewalk Labs, c2019

Of course the danger is once we start covering it in these icons of disclosure, and doing more and more mysterious things with our totems, we lose the calm ‘just enough internet’ approach that I love so much about this current iteration.

Maybe they’re just right as they are – and I should listen to them…

“Magic notebooks, not magic girlfriends”

The take-o-sphere is awash with responses to last week’s WWDC, and the announcement of “Apple Intelligence”.

My old friend and colleague Matt Webb’s response is one of my favourites, needless to say – and I’m keen to try it, naturally.

I could bang on about it of course, but I won’t – because I guess I have already.

Of course, the concept is the easy bit.

Having a trillion-dollar corporation actually then make it, especially when it’s counter to their existing business model is another thing.

I’ll just leave this here from about 6 years ago…

BUT!

What I do want to talk about is the iPad calculator announcement that preceded the broader AI news.

As a fan of Bret Victor, this made me very happy.

As a fan of Seymour Papert it made me very happy.

As a fan of Alan Kay and the original vision of the Dynabook it made me very happy.

But moreover – as someone who has never been that excited by the chatbot/voice obsessions of BigTech, it was wonderful to see.

Of course the proof of this pudding will be in the using, but the notion of a real-time magic notebook where the medium is an intelligent canvas responding as an ‘intelligence amplifier’ is much more exciting to me than most of the currently hyped visions of generative AI.

I was particularly intrigued to see the more diagrammatic example below, which seemed to belong in the conceptual space between Bret Victor’s Dynamicland and Papert’s Mathland.

I recall when I read Papert’s “Mindstorms” (back in 2012, it seems?) I got retroactively angry about how I had been taught mathematics.

The ideas he advances for learning maths through play, embodiment and experimentation made me sad that I had not had the chance to experience the subject through those lenses, but instead through rote learning leading to my rejection of it until much later in life.

As he says “The kind of mathematics foisted on children in schools is not meaningful, fun, or even very useful.”

Perhaps most famously he writes:

“a computer can be generalized to a view of learning mathematics in “Mathland”; that is to say, in a context which is to learning mathematics what living in France is to learning French.”

Play, embodiment, experimentation – supported by AI – not *done* for you by AI.

I mean, I’m clearly biased.

I’ve long thought the assistant model should be considered harmful. Perhaps the Apple approach announced at WWDC means it might not be the only game in town for much longer.

Back at Google I was pursuing concepts of Personal AI with something called Project Lyra, which perhaps one day I can go into a bit more deeply.

Anyway.

Early on Jess Holbrook turned me onto the work of Professor Andy Clark, and I thought I’d try and get to work with him on this.

My first email to him had the subject line of this blog post: “Magic notebooks, not magic girlfriends” – which I think must have intrigued him enough to respond.

This, in turn, led to the fantastic experience of meeting up with him a few times while he was based in Edinburgh and having him write a series of brilliant pieces (for internal consumption only, sadly) on what truly personal AI might mean through his lens of cognitive science and philosophy.

As a tease here’s an appropriate snippet from one of Professor Clark’s essays:

“The idea here (the practical core of many somewhat exotic debates over the ‘extended mind’) is that considered as thinking systems, we humans already are, and will increasingly become, swirling nested ecologies whose boundaries are somewhat fuzzy and shifting. That’s arguably the human condition as it has been for much of our recent history—at least since the emergence of speech and the collaborative construction of complex external symbolic environments involving text and graphics. But emerging technologies—especially personal AI’s—open up new, potentially ever- more-intimate, ways of being cognitively extended.”

I think that’s what I object to, or at least recoil from in the ‘assistant’ model – we’re abandoning exploring loads of really rich, playful ways in which we already think with technology.

Drawing, model making, acting things out in embodied ways.

Back to Papert’s Mindstorms:

“My interest is in the process of invention of “objects-to-think-with,” objects in which there is an intersection of cultural presence, embedded knowledge, and the possibility for personal identification.”

“…I am interested in stimulating a major change in how things can be. The bottom line for such changes is political. What is happening now is an empirical question. What can happen is a technical question. But what will happen is a political question, depending on social choices.”

The somewhat-lost futures of Kay, Victor and Papert are now technically realisable.

“what will happen is a political question, depending on social choices.”

The business model is the grid, again.

That is, Apple are toolmakers, at heart – and personal device sellers at the bottom line. They don’t need to maximise attention or capture you as a rent (mostly). That makes personal AI as a ‘thing’ that can be sold much more of a viable choice for them, of course.

Apple are far freer, well-placed (and of course well-resourced) to make “objects-to-think-with, objects in which there is an intersection of cultural presence, embedded knowledge, and the possibility for personal identification.”

The wider strategy of “Apple Intelligence” appears to be just that.

But – my hope is the ‘magic notebook’ stance in the new iPad calculator represents the start of exploration in a wider, richer set of choices in how we interact with AI systems.

Let’s see.

Station Identification

There’s been a lot of Her in the news.

Including the assertion that most of the folk who see it as a goal to be emulated in our technologies haven’t watched the end.

The end (which I did watch), if memory serves, is where the AIs ‘leave’ to go hang out with the emulated ghost of Alan Watts in the Oort Cloud.

And it’s ok, cos everyone then realises how alienated they’ve been by technofeudalism, and go for a picnic.

Or something.

I was trying to find a talk that Kevin Slavin gave, 16 or so years ago at the Architectural Association – at the launch of the BLDGBLOG book.

I can’t.

But again, if memory serves, its epic coda was the machines full of HFT algos ascending, like the end of Her, to a realm of pure lightspeed hyperfinance, uncoupled from the physical world they had been chained to.

Maybe, on a good day, I think the machines, and the people who think like machines will delaminate themselves, and we’ll be left behind – but it’ll be ok, because we’ll have people like Louis Cole.

Doing Louis Cole things.

I mean.