Household Spirits for Stations: InfoTotems at Brockley Station

I left my job at Lunar Energy last month and August has been about recharging – some holidays with family and also wandering London a bit catching up with folks, seeing some art/design, and generally regenerative flaneur-y.

Yesterday, for instance, I was off to lunch with my talented friends at the industrial design firm Approach Studio in Hackney.

This entailed getting the overground, and in doing so I found something wonderful at Brockley Station.

Placed along the platform were “InfoTotems” (at least that’s what they were called on the back of them). Sturdy, about 1.5m high and with – crucially in the bright SE London sunlight of August – easily-readable low-power E-Ink screens.

E-ink “InfoTotem” at Brockley Station, SE London

They seemed to function as very simple but effective dynamic way-finding, nudging me down the platform to where it predicted I’d find a less-busy carriage.

Location sensitive, dynamic signage: E-ink “InfoTotem” at Brockley Station, SE London

Wonderfully, when I did so, I got this message on the next InfoTotem.

“You’re in the right place”: contextual reassurance from E-ink “InfoTotem” at
Brockley Station, SE London

Nothing more than that – no extraneous information, just something very simple, reassuring and useful.

It felt really appropriate and thoughtful.

Not overreaching, overpromising, or overloading with *everything* else this thing could possibly do as a software-controlled surface.

Very nice, TfL folks.

E-ink “InfoTotem” at Brockley Station, SE London: Back view, UID…

I’m going to try to do a bit more poking on the provenance of this work, and where it might be heading, as I find it really delightful.

So far, I think the HW itself might be this from a small UK company called Melford Technologies.

It made me recall one of my favourite BERG projects I worked on, “The Journey” which was for Dentsu London – looking at ways to augment the realities of a train journey with light touch digital interventions on media surfaces along the timeline of the experience.

Place-based reassurance: Sketch for “The Journey” work with BERG for Dentsu London

Place-based reassurance: E-Ink magnetic-backed dynamic signage. Still from “The Journey” work with BERG for Dentsu London

I think what I like about the InfoTotems is that instead of a singular product doing a thing on the platform, it’s treated as a spatial experience between the HW surfaces, and as a result it feels like a service inhabiting the place, rather than just the product.

Without that overloading I was referring to, what else could they do?

Obviously this example of nudging me down the platform to a less-busy carriage is based on telemetry it’s received from the arriving train.
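A back-of-envelope sketch of how that nudging might work – mapping per-carriage load telemetry to a message on each totem. To be clear, every name and threshold here is my guess, not anything I know about TfL’s actual system:

```python
# Hypothetical sketch: per-carriage load telemetry from an arriving train
# drives the message each platform totem displays. All names and
# thresholds are invented for illustration, not TfL's real system.

def totem_messages(carriage_loads, totem_positions, quiet_threshold=0.5):
    """carriage_loads: fraction full (0.0-1.0) per carriage, front to back.
    totem_positions: index of the carriage each totem stands beside.
    Returns one message per totem."""
    # Find the carriage predicted to be least busy.
    quietest = min(range(len(carriage_loads)), key=lambda i: carriage_loads[i])
    messages = []
    for pos in totem_positions:
        if carriage_loads[pos] <= quiet_threshold or pos == quietest:
            # Contextual reassurance, as on the Brockley totems.
            messages.append("You're in the right place")
        elif quietest > pos:
            messages.append("Quieter carriages further along →")
        else:
            messages.append("← Quieter carriages back that way")
    return messages

print(totem_messages([0.9, 0.8, 0.3, 0.6], [0, 2, 3]))
```

The nice property is that the “you’re in the right place” reassurance falls out of the same rule as the nudge – each totem only knows its own spot and the shared telemetry.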

Could there be more that conveys the spirit of the place – observations or useful nuggets – connected to where you are temporarily, but where the totems sit more permanently?

In “The Journey” there’s a lovely short bit where Jack is travelling through the UK countryside and looks at a ticket that has been printed for him, a kind of low-res augmented reality.

It’s a prompt for him to look out the window to notice something, knowing where he’s sitting and what time he’s going to go past a landmark.

Could low-powered edge AI start to do something akin to this? To build out context or connections between observations made about the surroundings?

Cyclist counter sign in Manchester UK, Image via road.cc

We’ve all seen signs that count – for example ‘water bottles filled’ or ‘bike riders using this route today’ – but an edge AI could perhaps do something more lyrical, or again use the multiple positioned screens along the platform to tell a more serialised, unique story.

Maybe something like Matt W’s Poem/1 e-ink clock – with industrial design from Approach Studio, coincidentally!

Maybe it has a memory of place, a journal. It would need some delicate, sensitive, playful, non-creepy design – as well as technological underpinnings, i.e. privacy-preserving sensing and edge AI.
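A toy sketch of what a non-creepy “memory of place” could mean in practice: the journal only ever retains coarse aggregates (hourly footfall counts), never anything identifying. Purely illustrative, and nothing to do with any real TfL or vendor system:

```python
# Toy "memory of place": the totem's journal keeps only coarse
# aggregates (footfall count per hour of day), discarding individual
# detections - one possible shape for privacy-preserving sensing.
from collections import Counter

class PlaceJournal:
    def __init__(self):
        self.footfall = Counter()  # hour-of-day -> people counted

    def observe(self, hour, count):
        # Only the aggregate survives; nothing identifying is stored.
        self.footfall[hour] += count

    def note(self, hour):
        # Turn the aggregate memory into a small observation about now.
        busiest, _ = self.footfall.most_common(1)[0]
        if hour == busiest:
            return "This is usually the liveliest hour on this platform."
        return f"Quieter than {busiest}:00, the platform's busiest hour."

journal = PlaceJournal()
journal.observe(8, 120)
journal.observe(18, 90)
journal.observe(14, 30)
print(journal.note(8))
```

The “lyrical” layer would sit on top of something like this – the point is just that the raw material can be aggregate-only from the start.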

I recall Matt Webb also working with Hoxton Analytics who were pursuing stuff in this space to create non-invasive sensing of things like traffic and footfall in commercial space.

In terms of edge AI that starts to relate to the spatial world, I’m tracking the work of friends who have started Archetype.ai to look at just that. I need to delve into it and understand it more.

Perhaps it would then also need something like the work that Patrick Keenan and others did back at Sidewalk Labs to create a typology of sensing in public places.

“A visual language to demystify the tech in cities.” – Patrick Keenan et al, Sidewalk Labs, c2019

Of course the danger is once we start covering it in these icons of disclosure, and doing more and more mysterious things with our totems, we lose the calm ‘just enough internet’ approach that I love so much about this current iteration.

Maybe they’re just right as they are – and I should listen to them…

Vision Pro(posals)

A couple of weeks ago, at the end of July, I booked a slot to try out the Apple Vision Pro.

It has been available for months in the USA, and might already be in the ‘trough of disillusionment’ there – but I wanted to give it a try nonetheless.

I sat on a custom wood and leather bench in the Apple Store Covent Garden that probably cost more than a small family car, as a custom machine scanned my glasses to select the custom lenses that would be fitted to the headset.

I chatted to the personable, partially-scripted Apple employee who would be my guide for the demo.

Eventually the device showed up on a custom tray perfectly 10mm smaller than the custom sliding shelf mounted in the custom wood and leather bench.

The beautifully presented Apple Vision Pro at the Apple Store Covent Garden

And… I got the demo?

It was impressive technically, but the experience – which seemed to be framed as one of ‘experiencing content’ – left me nonplussed.

I’m probably an atypical punter, but the bits I enjoyed the most were the playful calibration processes, where I had to look at coloured dots and pinch my fingers, accompanied by satisfying playful little touches of motion graphics and haptics.

That is, the stuff where the spatial embodiment was the experience was the most fun for me…

Apple certainly have gone to great pains to try and distinguish the Vision Pro from AR and VR – making sure it’s referenced throughout as ‘spatial computing’ – but there’s very little experience of space, in a kinaesthetic sense.

It’s definitely conceived of as ‘spatial-so-long-as-you-stay-put-on-the-sofa computing’ rather than something kinetic, embodied.

The technical achievements of the fine grain recognition of gesture are incredible – but this too serves to reduce the embodied experience.

At the end of the demo, the Apple employee seemed to be noticeably crestfallen that I hadn’t gasped or flinched at the usual moments through the immersive videos of sport, pop music performance and wildlife.

He asked me what I would imagine using the Vision Pro for – and I said, in the nicest possible way, that I probably couldn’t imagine using it – but I could imagine interesting uses teamed with something like Shapr3D and the Apple Pencil on my iPad.

He looked a little sheepish and said that probably wasn’t going to happen, but that soon, with SW updates, I could use the Vision Pro as an extended display. OK – that’s … great?

But I came away imagining more.

I happened to run into an old friend and colleague from BERG in the street near the Apple Store and we started to chat about the experience I’d just had.

I unloaded a little bit on them, and started to talk about the disappointing lack of embodied experiences.

We talked about the constraint of staying put on the sofa – rather than wandering around with the attendant dangers.

But we’ve been thinking about ‘stationary’ embodiment since Dourish, Sony Eyetoy and the Wii, over 20 years ago.

It doesn’t seem like that much of a leap to apply some of those thoughts to this new level of resolution and responsiveness that the Vision Pro presents.

With all that as a preamble – here are some crappy sketches and first (half-formed) thoughts I wanted to put down here.

Imagining the combination of a Vision Pro, iPad and Apple Pencil

Vision Pro STL Printer Sim

The first thing that came to mind in talking to my old colleague in the street was to take some of the beautiful realistically-embedded-in-space-with-gorgeous-shadows windows that just act like standard 2D pixel containers in the Vision Pro interface and turn them into ‘shelves’ or platens that you could have 3D virtual objects atop.

One idea was to extend my wish for some kind of Shapr3D experience into being able to “previsualise” the things I’m making in the real world. The app already does a great job of this with its AR features, but how about having a bit of fun with it, and rendering the object on the Vision Pro via a super fast, impossibly capable (simulated) 3d printer – that of course, because it’s simulated, can print in any material…

Sketch of Vision Pro 3d sim-printer
(Roughly) Animated sketch of Vision Pro 3d sim-printer

Once my designed object had been “printed” in the material of my choosing, super-fast (and without any of the annoying things that can happen when you actually try to 3d print something…) I could of course change my scale in relation to it to examine details, place it in beautiful inaccessible immersive surroundings, apply impossible physics to it etc etc. Fun!


Vision Pro Pottery

Extending the idea of the virtual platen – could I use my iPad in combination with the Vision Pro as a cross-over real/virtual creative surface in my field of view? Rather than have a robot 3d printer do the work for me, could I use my hands and sculpt something on it?

Could I move the iPad up and down or side to side to extrude or lathe sculpted shapes in space in front of me?

Could it spin and become a potter’s wheel, with the detailed resolution hand detection of the Vision Pro picking up the slightest changes to give fine control to what I’m shaping?

Is Patrick Swayze over my shoulder?

Vision Pro + iPad sculpting in space.

Maybe it’s something much more throw-away and playful – like using the iPad as an extremely expensive version of a deformed wire coat-hanger to create streams of beautiful, iridescent bubbles as you drag it through the air – but perhaps capturing rare butterflies or fairies in them as you while away the hours atop Machu Picchu or somewhere similar where it would be frowned upon to spill washing-up liquid so frivolously…

Making impossible bubbles with an iPad in Vision Pro world

Of course this interaction owes more than a little debt to a previous iPad project I saw get made first-hand, namely BERG’s iPad Light-painting.

Although my only real involvement in that project was as a photographic model…

Your correspondent behind an iPad-lightpainted cityscape (Image by Timo, of course)

Pencils, Pads, Platforms, Pots, Platens, Plinths

Perhaps there is a more general, sober, useful little pattern in these sketches – of horizontal virtual/real crossover ‘plates’ for making, examining and swapping between embodied creation with pencil/iPad and spatial examination and play with the Vision Pro.

I could imagine pinching something from the vertical display windows in Vision Pro to place onto my iPad (or even my watch?) in order to keep it, edit it, change something about it – before casting it back into the simulated spatial reality of the Vision Pro.

Perhaps it allows for a relationship between two realms that feels more embodied and ‘real’ without having to leave the sofa.

Perhaps it also allows for less ‘real’ but more fun stuff to happen in the world of the Vision Pro (which in the demo seems doggedly to anchor on ‘real’ experience verisimilitude – sport, travel, family, pop concerts).

Perhaps my Apple watch can be more of a Ben 10 supercontroller – changing into a dynamic UI to the environment I’m entering, much like it changes automatically when I go swimming with it and dive under…

Anyway – was very much worth doing the demo, I’d recommend it, if only for some quick stretching (and sketching) of the mindlegs.

My sketches in a cafe a few days after the demo

All in all I wish the Vision Pro was just *weirder*.

Back when it came out in the US in February I did some more sketches in reaction to that thought… I can’t wait to see something like a bonkers Gondry video created just for the Vision Pro…

Until then…