“Back to BASAAP” – My talk at ThingsCon 2025 in Amsterdam

Last Friday I had the pleasure of speaking at ThingsCon in Amsterdam, invited by Iskander Smit to join a day exploring this year’s theme of ‘resize/remix/regen’.

The conference took place at CRCL Park on the Marineterrein – a former naval yard that’s spent the last 350 years behind walls, first as the Dutch East India Company’s shipbuilding site (they launched Michiel de Ruyter’s fleet from here in 1655), then as a sealed military base.

Since 2015 it’s been gradually opening up as an experimental district for urban innovation, with the kind of adaptive reuse that gives a place genuine character.

The opening keynote from Ling Tan and Usman Haque set a thoughtful and positive tone, and the whole day had an unusual quality – intimate scale, genuinely interactive workshops, student projects that weren’t just pinned to walls but actively part of the conversation. The kind of creative energy that comes from people actually making things rather than just talking about making things.

My talk was titled “Back to BASAAP” – a callback to work at BERG, threading through 15-20 years of experiments with machine intelligence.

The core argument (which I’ve made in the Netherlands before…): we’ve spent too much time trying to make AI interfaces look and behave like humans, when the more interesting possibilities lie in going beyond anthropomorphic metaphors entirely.

What happens when we stop asking “how do we make this feel like talking to a person?” and start asking “what new kinds of interaction become possible when we’re working with a machine intelligence?”

In the talk I try to update my thinking here with contemporary signals around more-than-human design, and also more-than-LLM approaches to AI – namely the so-called “World Models”.

What follows are the slides with my speaker notes – the expanded version of what I said on the day, with the connective tissue that doesn’t make it into the deck itself.

One of the nice things about going last is that you can adjust your talk and slides to include themes and work you’ve seen throughout the day – and I was particularly inspired by Ling Tan and Usman Haque’s opening keynote.

Thanks to Iskander and the whole ThingsCon team for the invitation, and to everyone who came up afterwards with questions, provocations, and adjacent projects I need to look at properly.



Hi I’m Matt – I’m a designer who studied architecture 30 years ago, then got distracted.

Around 20 years ago I met a bunch of folks in this room, and also started working on connected objects, machine intelligence and other things… Iskander asked me to talk a little bit about that!

I feel like I am in a safe space here, so I imagine many of you are like me and have a drawer like this, or even a brain like this… so hopefully this talk is going to have some connections that will be useful one day!

So with that said, back to BERG.

We were messing around with ML, especially machine vision – very early stuff – e.g. this experiment we did in the studio with Matt Biddulph to try and instrument the room, and find patterns of collaboration and space usage.

And at BERG we tended to have some recurring themes that we would resize and remix throughout our work.

BASAAP was one.

BASAAP is an acronym for Be As Smart As A Puppy – which actually I think first popped into my head while at Nokia a few years earlier.

It alludes to this quote from MIT roboticist and AI curmudgeon Rodney Brooks, who said that if we get the smartest folks together for 50 years to work on AI, we’ll be lucky if we can make it as smart as a puppy.


I guess back then we thought that puppy-like technologies in our homes sounded great!

We wanted to build those.

Also it felt like all the energy and effort to make technology human was kind of a waste.

We thought maybe you could find more delightful things on the non-human side of the uncanny valley…

And implicit in that, I guess, was a critique of the mainstream tech push of the time (the earliest days of Siri and Google Assistant) towards voice interfaces, which were the dominant dream.

A Google VP at the time stated that their goal was to create ‘the Star Trek computer’.

Our clients really wanted things like this, and we had to point out that voice UIs are great for moving the plot of TV shows along.


I only recently learned this term (via the excellent Futurish podcast) – ‘hyperstition’ – a self-fulfilling idea that becomes real through its own existence, usually seeded in movies or other fictions: flying cars, for example.

And I’d argue we need to be critically aware of them still in our work…

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

Whatever your position on them, LLMs are in a hyperstitional loop of epic proportions right now.

Disclaimer: I’ve worked on them, and I use them. I still try to think critically about them as a material.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

And while it can feel like we have crossed the uncanny valley there, I think we can still look to the BASAAP thought to see if there’s other paths we can take with these technologies.

https://whatisintelligence.antikythera.org/

My old boss at Google, Blaise Agüera y Arcas has just published this fascinating book on the evolutionary and computational basis of intelligence.

In it he frames our current moment as the start of a ‘symbiosis’ of machine and human intelligence, much as we can see other systems of natural/artificial intelligences in our past – like farming, cities, economies.

There’s so much in there – but this line from an accompanying essay in Nature brings me back to BASAAP. “Their strengths and weaknesses are certainly different from ours” – so why as designers aren’t we exposing that more honestly?


In work I did in Blaise’s group at Google in 2018 we examined some ways to approach this – by explicitly surfacing an AI’s level of confidence in the UX.

Here’s a little mock-up of some work with Nord Projects from that time, where we imagined dynamic UI built by the agent to surface its uncertainties to its user – and, right up to date, papers published at the launch of Gemini 3 suggest the promise of generated UI could start to support stuff like that.
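To make that idea concrete, here’s a tiny hypothetical sketch – not the actual Nord Projects or Google work, just an illustration with invented names and thresholds – of an agent adapting the UI copy it generates to the confidence of its own prediction:

```python
# Hypothetical sketch only: an agent adapts the UI it generates to its own
# confidence. Names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str          # what the model thinks it is seeing / should do
    confidence: float   # 0.0 to 1.0

def render_card(p: Prediction) -> str:
    """Return UI copy that is honest about the model's uncertainty."""
    if p.confidence > 0.9:
        return p.label                                   # assert plainly
    if p.confidence > 0.6:
        return f"Probably {p.label} – tap to correct"    # hedge, invite correction
    return f"I'm not sure – is this {p.label}?"          # hand the decision back

print(render_card(Prediction("a lemon", 0.95)))
print(render_card(Prediction("a lemon", 0.42)))
```

The point isn’t the thresholds – it’s that the interface exposes the machine’s actual epistemic state rather than performing human-like certainty.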


And just yesterday this new experimental browser, ‘Disco’, was announced by Google Labs – it builds mini-apps based on what it thinks you’re trying to achieve…


But again, let’s return to that thought about machine intelligence having a symbiosis with the human rather than mimicking it…


There could be more useful prompts from the non-human side of the uncanny valley… e.g. spiders.

I came across this piece in Quanta https://www.quantamagazine.org/the-thoughts-of-a-spiderweb-20170523/ some years back, about cognitive-science experiments on spiders revealing that their webs are part of their ‘cognitive equipment’. The last paragraph struck home – ‘cognition to be a property of integrated nonbiological components’.

And… of course…

In Peter Godfrey-Smith’s wonderful book Other Minds, he explores different models of cognition and consciousness through the lens of the octopus.

What I find fascinating is the distributed, embodied (rather than centralized) model of cognition they appear to have – with most of their ‘brains’ being in their tentacles…

I have always found this quote from ETH’s Bertrand Meyer inspiring… No need for ‘brains’!!!


“H is for Hawk” is a fantastic memoir of the relationship between someone and their companion species. Helen Macdonald writes beautifully about the ‘her that is not her’.

(footnote: I experimented with search-and-replace in her book here back in 2016: https://magicalnihilism.com/2016/06/08/h-is-for-hawk-mi-is-for-machine-intelligence/)

This is CavCam and CavStudio – more work by Nord Projects, with Alison Lentz, Alice Moloney and others in Google Research examining how these personalised trained models could become intelligent reactive ‘lenses’ for creative photography.

We could use AI to create different, complementary ‘umwelts’ for us.


I’m sure many of you are familiar with Thomas Nagel’s 1974 piece, ‘What Is It Like to Be a Bat?’ – well, what if we can know that?

BAAAAT!!!

This ‘more-than-human’ approach to design has been evident in the room, and in the zeitgeist, for some time now.

We saw it beautifully in Ling Tan and Usman Haque’s work and practice this morning, and of course it’s been wonderfully examined in James Bridle’s writing and practice too.

Perhaps surprisingly, the tech world is heading there too.

There’s a growing suspicion among AI researchers – voiced at their big annual event, NeurIPS, just a week or so ago – that the language model will need to be supplanted, or at least complemented, by other more embodied and physical approaches, including what are getting categorised as “World Models”. Playing in the background is video from Google DeepMind’s announcement this autumn on that agenda.

Fei-Fei Li (one of the godmothers of the current AI boom) has a recent essay on Substack exploring this.

“Spatial Intelligence is the scaffolding upon which our cognition is built. It’s at work when we passively observe or actively seek to create. It drives our reasoning and planning, even on the most abstract topics. And it’s essential to the way we interact—verbally or physically, with our peers or with the environment itself.”

Here are some old friends from Google who have started a company – Archetype AI – looking at physical world AI models that are built up from a multiplicity of real-time sensor data…

As they mention the electrical grid, here’s some work from my time at the solar/battery company Lunar Energy in 2022/23 that illustrates the potential of such approaches.

In Japan, Lunar have a large fleet of batteries controlled by their Lunar AI platform. You can perhaps see in the time-series plot the battery sites ‘anticipating’ the approach of the typhoon and making sure they are charged to provide effective backup to the grid.

Together with my old BERG colleague Tom Armitage, we did some experiments at Lunar to bring these network behaviours to life with sound and data visualisations.

Maybe this is… “What is it like to be a BAT-tery’.

Sorry…


I think we might have had our own little moment of sonic hyperstition there…

So, to wrap up.

This summer I had an experience I have never had before.

I was caught in a wildfire.

I could see it on a map from space, with ML world models detecting it – but also with my eyes, 500m from me.

I got out – driving through the flames.

But it was probably the most terrifying thing that has ever happened to me… I was lucky. I was a tourist. I didn’t have to keep living there.

But as Ling and Usman pointed out – we are in a world now where these types of experiences are going to become more and more common.

And as they said – the only way out is through.

This is an Iceye Gen4 Synthetic Aperture Radar satellite – designed and built in Europe.

Here’s some imagery they released this past week showing how they’ve been helping emergency response to the flooding in SE Asia – e.g. Sri Lanka here, with real-time imaging.

But as Ling said this morning – we can know more and more but it might not unlock the regenerative responses we need on its own.

How might we follow their example with these new powerful world modelling technologies?

As well as Ling and Usman’s work, responses like the ‘Resonant Computing’ manifesto (disclosure: I’m a signatory/supporter) and the ‘planetary sapience’ visions of the Antikythera organisation give me hope that we can direct these technologies to resize, remix and regenerate our lives and the living systems of the planet.

The AI-assisted permaculture future depicted in Ruthanna Emrys’ “A Half-Built Garden” gives me hope.

The rise of bioregional design as championed by the Future Observatory at the Design Museum in London gives me hope.

And I’ll leave you with the symbiotic nature/AI hope of my friends at Superflux and their project that asks, I guess – “What is it like to be a river?”…

https://superflux.in/index.php/work/nobody-told-me-rivers-dream/#

THANK YOU.

A palimpsest for a place: The Incidental at Salone del Mobile 2009

THE INCIDENTAL 01, originally uploaded by dcharny.

The year of the papernet continues apace!

Very exciting this morning to see the first edition of The Incidental, a project done for the British Council by Schulze & Webb, Fromnowon, Åbäke and others, for the Salone del Mobile furniture and design event in Milan – about the biggest event in the product design world.

I was lucky enough to be contacted by Daniel of Fromnowon early on in the genesis of the project, when they were moving from the traditional thinking of staging an exhibition of British product design to a service/media ‘infrastructure intervention’ in the space and time of the event itself.

Something that was more alive and distributed and connected to the people visiting Salone from Britain, and also connecting those around the world who couldn’t be there.

From the early brainstorms we came up with the idea of a system for collecting the thoughts, recommendations, pirate maps and sketches of the attendees, to republish and redistribute the next day in a printed, pocketable pamphlet – which would build up over the four days of the event into a unique palimpsest of the place and people’s interactions with it, in it.

One thing that’s very interesting to me is how this rapidly-produced thing then becomes a ‘social object’: creating conversations, collecting scribbles, instigating adventures – which then get collected and redistributed. A feedback loop made out of paper, in a place.

We were clearly riffing on the work done by our friends at the RIG with their “Things our friends have written on the internet” and the thoughts of Chris Heathcote, Aaron and others who participated in Papercamp back in January. In many ways this may be the first commercial post-papercamp product? Or is it an unproduct?

Anyway – very pleased to see this in the world. The team in Milan is working hard to put it together live every night from things twittered and flickered and sketched and kvetched in the galleries and bars. It seems they turned it around in good time, with the distributors going out with their custom-designed delivery bags and bikes at 8am this morning…

Can’t wait to see how the palimpsest builds through the week, and also how ideas like this might build through events throughout the year.

Remember, if you have quests or questions for the roving reporters of The Incidental, then you can get hold of them @theincidental on Twitter.

UPDATE

I asked the roving reporters via @theincidental to track down Random International with Chris O’Shea’s installation at Milan ’09, and they did!

Action-at-a-distance = Magic!


My outboard brain = my walking city



WALKING CITY, originally uploaded by blackbeltjones.

Jonathan Feinberg emailed me and said “Inspired by your typographically sophisticated “hand-tooled” cloud, I came up with a novel way of cramming a bunch of words together.” – which is undeserved praise for me, and dramatically undersells what he’s achieved with Wordle.
It does the simple and much-abused thing of creating a tag-cloud, and executes it playfully and beautifully. There are loads of options for type and layout, and it’s enormous fun to fiddle with.
As I said back when Kevan Davies did his del.icio.us phrenology visualiser, there is some apophenic pleasure in scrying your tag cloud and seeing the patterns there – so I was very pleased when my playing with Wordle returned me an Archigram-esque walking city of things I’ve found interesting.
Congrats to Jonathan on building and finally releasing Wordle!

“Context-Handback”

Unknown Pleasures Album Cover

“Context-Handback” is something I find that I want nearly everything – or my everyware, at least – to do.

What do I mean?

An inverse-concrete example: something that can’t perform context-handback is my new little iPod shuffle.

I bought it last weekend after a longish break from the Jobs/Ive Hegemon, in order to play some of the iTunes purchased DRM’d gear I’m stuck with, and also because it’s just gorgeous as an object.

More perfect than the perfect thing it seems in both build quality and simplicity.

Foe had owned an original shuffle before but I’d never tried it – I’m finding, though, that I really love the surrender to the flow of your own music – music that you perhaps didn’t realise you owned or had neglected, surfaced by the pseudo-stochastic, inscrutable selectah inside the tiny metal extrusion.

Perhaps I’m prepped to enjoy this semi-surprising personal radio station by my other semi-surprising personal radios – last.fm and pandora.

I listen to a lot of last.fm at work, and I find its recommendations only more and more rewarding over time.

But I find I obsess now on feeding it more and more – I want to handback to it from all of my musical consumption – my shuffle, the radio on my N95, shazam-tags from something playing in the pub – everything.

I want to bring it offerings.

And there’s the rub – so little of that musical consumption – in fact the bulk of it, done on the go – can be offered back to last.fm.

It’s so frustrating that my musical discoveries and rediscoveries can’t feed back into creating more, or even that I can’t see what I enjoyed in iTunes when I synchronise with the shuffle.

Faltering steps towards remedying this trivial problem can be seen in something like this hacked-up scrobbler for mobile in S60 python.

More context-handback hopefully in the next few years, until then – unknown pleasures.

What put the “architecture” into “information architecture”?

From Peterme’s closing plenary at the IA Summit:

“…I think that web 2.0 puts the “architecture” in information architecture. Think of an architect. They design the space. People flow through it, meet in it, contribute to it! With that model, the bulk of information architecture currently on the web isn’t really architecture — it’s some form of hyperdimensional document organizing. We’re not creating a space that people move through, and engage with. We’re classifying material to be retrieved. But with web 2.0, we are providing an architecture — a space, a platform through which and upon which people move, contribute, and change…

…If information is a substrate running through an increasing amount of our “real-world” lives, and we believe that these web 2.0 principles are important for the future of information architecture, how do we merge the two?”

And

“as digital networked media pervades more and more of our lives, the idea of a discrete region called “cyberspace” starts to feel like an anachronism. Who here has a mobile phone on them? One that can send photos by email, for example? Well, you’re all carrying “cyberspace” in your pocket. And once that happens, distinguishing that from the “real world” becomes impossible.”

“In our wiki”

I had a random Friday afternoon thoughtfart while listening to Paul Morley/Strictly Kev’s 1hr remix of ‘Raiding the 20th Century’.

Listening to Morley’s* cultural history of the cut-up on top of Kev’s sonic critique made me think how cool it would be to hear Melvyn Bragg and the "In Our Time" gang’s Thursday morning ruminations on, for instance, Machiavelli – cut-and-pasted over mashed-up madrigals.

Putting this fancy to one side for one minute… it made me think of other superlayered participatory critique and knowledge construction – the Wikipedia.

If there were a transcript of "In Our Time" (is there?), why couldn’t that be munged with Wikipedia like Stefan did with BBC News… and what if new nodes were then being formed by Melvyn, his guests and his audience – together, for everyone, every week, and cross-referenced to a unique cultural, contextual product – the audio broadcast.

The mp3 of "In Our Time" sliding into the public domain and onto the Internet Archive’s servers, every Thursday rippling through the noösphere, reinvigorating the debate in the Wikipedia, renewing collective knowledge.

"In Our Time" is great ‘campfire’ stuff – you have The Melv as the semi-naive interlocutor and trusted guide, the experts as authority to be understood and questioned… but it’s only 30 minutes and 4 people… what about scaling it way out into the wikinow?

How good would that be??!!!!

Of course a first step, a sheltered cove, would be to set up "In Our Time" with their own wiki for Neal Stephenson Baroque Cycle / Pepys diary-style annotations of the transcript and mp3…

The Melv’s own multimedia mash’d up many-to-many mp3 meme machine.

—-
Update: over the weekend, Matt Biddulph showed another example of how powerful mixing BBC web content with web-wide systems might be: with del.icio.us tags extending BBC Radio3’s content. Fantastic stuff.
—-

p.s. from a Bio of Morley found at pulp.net:
"Morley earns a farthing every time Charlie’s Angels, Full Throttle is shown or trailed, owing to his contribution as a member of the Art of Noise to Firestarter by the Prodigy, which features a sample of the Art of Noise’s Beat Box, used in the film. The pennies are mounting up."

Del.icio.us phrenology continued

Kevan has knocked up an awesome visualisation tool for a user’s del.icio.us tags.

Here’s what mine looks like:

mattlicious

As compared to my hand-tooled version.

Here’s what Chris looks like:

chrislicious

Here’s what Clay looks like:

claylicious

Here’s what Warren looks like:

warrenlicious

And here’s what Foe looks like:

foelicious

Kevan has named it:

extispicious, a. [L. extispicium an inspection of the innards for divination; exta the entrails + specere to look at.] Relating to the inspection of entrails for prognostication.

and it does feel a little bit mystical – but not guts, more tea-leaves. Or even phrenological, seeing the bumps in people’s outboard-brains…

» kevan.org: extisp.icio.us – charting the tags of del.icio.us users

Superfantastique! Keyword RSS of BBC News

This is great – someone has hacked a service that generates a custom RSS feed based on a keyword search of BBC News.

So, for example, if I wanted to track the UK Labour government’s idiotic plans for ID cards, I just type in “ID Cards” and get an RSS feed to put in my news reader of choice.

It also works for the rest of the BBC website, so if I wanted to track any content on the band “Franz Ferdinand”, I just type it in and get a feed which will return whatever content the BBC have got on the hip Scottish art-rock combo.
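For what it’s worth, the underlying idea is simple enough to sketch. The following is a generic, hypothetical illustration in plain Python – invented example stories and URLs, no BBC endpoints – of filtering a list of stories by keyword and emitting the matches as RSS 2.0:

```python
# Hypothetical sketch of a keyword-to-RSS service: filter stories by keyword
# and emit RSS 2.0. Example stories and URLs are invented, not the real service.
from xml.sax.saxutils import escape

stories = [
    {"title": "ID cards bill debated", "link": "http://example.com/1"},
    {"title": "Franz Ferdinand announce tour dates", "link": "http://example.com/2"},
]

def keyword_feed(keyword: str) -> str:
    """Return an RSS 2.0 document of stories whose titles match the keyword."""
    items = "".join(
        f"<item><title>{escape(s['title'])}</title><link>{escape(s['link'])}</link></item>"
        for s in stories
        if keyword.lower() in s["title"].lower()
    )
    return (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        f"<title>Search: {escape(keyword)}</title>{items}"
        "</channel></rss>"
    )

print(keyword_feed("ID cards"))
```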

UPDATE: Someone from the BBC has asked me to take this entry down, so as a compromise I have removed the links to the site outlined above.
David, who posted from the BBC asking for the reference to be removed, has replied more fully in the comments below, raising some good practical challenges of doing RSS and connecting to web services on huge content sites like BBC News. Many thanks to him for taking the time to clarify and explain some of the issues.