ChatGPT vs Breezepunk: Update

Prior attempts to get LLMs to opine on futurist neologisms had not impressed me, but having installed the ChatGPT app on my phone to try the voice interface, I thought I’d try again.

This, while typically bloodless, is better than I’d expected…

“As of my last update in January 2022, “breezepunk” isn’t a widely recognized term in popular culture, art, or design. The “-punk” suffix is often added to words to describe a subgenre or aesthetic, like “steampunk,” “cyberpunk,” or “solarpunk.” These genres often mix futuristic or alternative technology with certain periods of design or cultural elements.

If “breezepunk” were to follow this trend, it might describe an aesthetic or subgenre that emphasizes wind or air-based technologies, or perhaps a world where breezes, winds, or the atmosphere are central themes, combined with specific cultural or temporal elements. It could also be an aesthetic inspired by the feeling of breezy, airy places or concepts.

However, it’s important to note that without a clear and widely accepted definition, “breezepunk” can be interpreted in many ways. If it has emerged as a new term or trend after 2022, you might need to consult contemporary sources or communities for a current definition.”

Electrosheds workshop at AHO, May 2023

It was wonderful to be invited back to AHO after such a long time to give a talk (see separate post) and a workshop for the interaction design students.

I decided to try a “walkshop” – following in the footsteps (ahem) of James Bridle, Adam Greenfield, Deb Chachra (and many others I’m sure) – to investigate the invisible networks of electricity in our everyday lives and in the environment around us.

I based it loosely on Kevin Kelly’s “Big Here” quiz – which asks (tough) questions that locate you in the technical, logistical and natural ecologies we are embedded in.

If I’d thought of it, I should have shown the first 30 minutes of the first episode of James Burke’s “Connections” too.

We started the day with a short talk from me (the slides of which are below) introducing the topic and how we’d examine it in the walkshop.

After that we went on a ‘local energy safari’ and then for a few hours the students prepared responses and communication pieces based on what they’d found. I’ll post some of those separately.

It was a beautiful spring day – which was perfect for a ‘walkshop’ – and the students were enthusiastic participants in what I think was a *partially* successful experiment.

I’ll write a bit about that in another post on their responses.

Huge thanks to Mosse for the invitation and all the AHO students for their energy and patience!

I’d love to try this again – or have others try it! Please do get in touch if you’d like to do it somewhere else in Europe, or better yet invite me to do it with you!

Update [September 28th 2023]: Before I did this I was sadly not aware of Jenny Odell’s fantastic 2013 project “Power Trip”, which explores this territory beautifully.

I found the project coincidentally while sending a friend a link to Odell’s site, after he’d discovered some Google Maps-derived artworks that I’d associated with the artist for years.

Back at Google Creative Lab, we’d worked with her on creating giant murals on the sides of data centres – themselves places of infrastructural fascination and critique by many of the artists referenced in this workshop…


Electrosheds Intro talk

One of my favourite pieces by Kevin Kelly is this – the ‘watershed quiz’.

In this he asks a set of questions which locate you in your ‘Big Here’.

You start where you are, and begin to pull the thread out to larger and larger scales…

“You live in the big here. Wherever you live, your tiny spot is deeply intertwined within a larger place, imbedded fractal-like into a whole system called a watershed, which is itself integrated with other watersheds into a tightly interdependent biome. At the ultimate level, your home is a cell in an organism called a planet. All these levels interconnect. What do you know about the dynamics of this larger system around you? Most of us are ignorant of this matrix. But it is the biggest interactive game there is. Hacking it is both fun and vital.”

The Big Here Quiz, Kevin Kelly https://kk.org/cooltools/the-big-here-qu/
Questions from The Big Here Quiz, Kevin Kelly https://kk.org/cooltools/the-big-here-qu/
Tubes by Andrew Blum

Andrew’s book is a striking piece of “Big Here” writing – pulling on the thread of his squirrel-sabotaged internet cabling and ending up halfway around the world, watching divers swim ashore carrying backbone fibre over their shoulders.

I want us to do something similar with our energy, leaving this room and following where our energy is coming from, and noting how others are embedded similarly.

We’re going to leave AHO and ‘pull on the thread of your electrons’, like Andrew Blum did with his connectivity bits…

• From the power you touch & use out to the distribution, then transmission

• Look for hints of new topologies, local production and new forms – what might be taking hold, hybrids, commercial, official, unofficial, municipal, local, improvised…

Then

• Create a journal / map / notes to record your impressions

• A piece of communication to yourself

• To others

Please remember!

It doesn’t have to be “correct” – think like an amateur naturalist… record observations, things you see and interpret.

Think about spotting phenomena: behaviour, difference and context from observation – not worrying if you have the correct names or specialist knowledge to understand the system in abstract.

This morning I tried pulling on the thread from the apartment I’m staying at…

Electrical touch points in the apartment block where I stayed in Oslo.

I looked up the names I found on the various (old) bits of electrical infrastructure in the apartment. This gave me some threads to pull on.

To pull on those threads I consulted the wonderful Open Infrastructure Map.

The area around AHO on Open Infrastructure Map

Do you recognise this building?

Akersberget Substation, across the street from AHO

Yep – it’s right across the street from AHO. And it’s the first link in a big chain from this area out to where the electrons we’re using right now probably originate.

Let’s pull the thread!

Zooooooming out – we can see some next links in the chain

Zooming out from Grunerløkka on Open Infrastructure Map
Zooming out to see the electrical infrastructure of Central Oslo

Looking at this, we can make a decision to follow the thread through the Sogn Substation back to the generating sources.

Sogn Substation, Oslo
Zooming out to the area surrounding Oslo to view possible generation sources

Again, we can decide to follow the thread of our electrons to one of the nearest hydroelectric generators – Nore II around 180km away.

Nore II Hydroelectric station, ~180km NW of Oslo

We could drive there in about 3hrs – or take a very long but scenic cycle there in the extended Norwegian (summer) day…

I mean, it looks lovely there!

Nore II Power Station, image by Amit Rathore
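If you wanted to pull the same threads in code rather than by panning the map, you could query the OpenStreetMap data that Open Infrastructure Map renders via the Overpass API. A minimal sketch follows – the coordinates and radius are rough placeholders for central Oslo, not AHO’s exact spot.

```python
import requests

# Minimal sketch: find power substations near a point using the Overpass API,
# which queries the same OpenStreetMap data that Open Infrastructure Map renders.
# The coordinates below are rough placeholders for central Oslo.
LAT, LON = 59.93, 10.75
RADIUS_M = 2000

query = f"""
[out:json][timeout:25];
(
  node["power"="substation"](around:{RADIUS_M},{LAT},{LON});
  way["power"="substation"](around:{RADIUS_M},{LAT},{LON});
);
out center;
"""

resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
resp.raise_for_status()

for element in resp.json()["elements"]:
    tags = element.get("tags", {})
    print(tags.get("name", "(unnamed)"), "–", tags.get("operator", "unknown operator"))
```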

So – just from your desktop you can explore pulling on your energy thread. But today, we’re going to go outside and walk around our area to see what we can find.

We’re going to explore the Grunerløkka area in groups

Preparing to leave for the walkshop

[We then left AHO in groups and explored the area in our “Local Energy Safari”]

We had lunch!

[After returning from the energy safari walkshop component, we attempted ‘design responses’ to what was found for about 90mins – this was in hindsight too short, but there were still some great outputs]

Now we’re going to make some designed responses to what we saw, recorded, found.

Again – these could be communications or mappings, or more generative/speculative responses. Here are some prompts from me, let’s see what we get!

Some prompts to get the students started

You might have spotted interesting new hybrids emerging – what could those lead to?

Think about new hybrid forms that are emerging as “energy on the street”

You could think about social structures that could emerge around adversity or abundance – for instance some of the energy-sharing practices that emerged around Occupy Sandy in NYC.

And, for inspiration only, the work of Clifford Harper in the 1970s on ‘radical technology’ – reprogramming and using appropriate technology to share resources in a town.

Clifford Harper, Radical Technology
Clifford Harper, Radical Technology

Again for inspiration – perhaps make a page from a future Whole Earth Catalog documenting technology, practices and methods around your energy safari ideas.

The Whole Earth Catalog as a genre/format inspiration

The Cloud vs The Grid and Electrosheds workshop at AHO, Oslo, May 2023

It was wonderful to be invited back to AHO and Oslo in early May by my old friends and sometime colleagues there – and to have the opportunity to speak about past projects, but also about what I’m doing now at Lunar Energy.

My framing was around the two biggest human-built machines on this planet – the cloud and the grid.

The former is the (much-younger) result of emergent properties and software that has traversed boundaries of territory and nations, while the latter has been a mainly top-down, deliberate design which is very anchored to geography, regulation and legacy technology.

When you put them together you start to get some interesting new possibilities for our energy transition – e.g. virtual power plants as made possible by Lunar’s Gridshare platform.

The Cloud vs The Grid: talk for IxDA Oslo & AHO, May 2023

My talk was kindly hosted and supported by IxDA Oslo. Their extremely professional recording and transcription of the event was turned around in record time and can be found here.

Thanks to everyone who came, and your thoughtful questions and conversations afterwards. It was a lot of fun, and a delight to be back in Oslo after more than a decade.

The next day I hosted a workshop with Mosse at AHO for her students, which I entitled “Electrosheds”, after Kevin Kelly‘s “Big Here Quiz” that aims to locate you at the heart of your watershed and local ecologies. I’ll talk about that in a separate post.

Mundane maker magic: backyard bespoke manufacturing with Shapr3D and the AnkerMake M5

This is incredibly mundane but, as with most blog posts, that doesn’t stop me from writing it down.

We have a string of solar-powered LED lights in our backyard.

The plastic stand that it was supplied with broke, and so for the past few months it has been precariously balanced on various branches, bits of fence etc – falling off into shadow and powerlessness – blown by the wind or local cats on the prowl.

I wanted to make a replacement, but could never find the time / energy / foolhardiness to fire up one of the big sledgehammer CAD apps to crack this particular nut.

I’d played with Shapr3D in the past to quickly sketch things while working at Google – but never considered it for 3D-printed output. I fired it up last night and, in five minutes, with a glass of wine and Taskmaster on in the background, had made what I needed.

Designing the part (in five minutes) with Shapr3D on iPad

This morning, I printed it out on the AnkerMake M5. This thing has been a revelation.

I first got myself a cheap (<£500) 3D printer about 5 years ago – but it was damn fiddly.

It never really worked well, and the amount of set-up and breakdown after every (terrible) print meant that it sat unused most of the time while I sent prints off to be done or, while at Google, prevailed upon colleagues to print something for me (shhhh).

The AnkerMake M5 is a different proposition entirely. While a little more expensive than my first (disappointing) machine, it lives up to the role that 3D printing has played in the imaginations of futurists, designers and maker-scene dilettantes (like me) for the past decade or so.

You print stuff. And it works. Fast.

That’s it.

Their bundled slicer software is pretty good – good enough for me at least, and you can have something like this little part spat out in about 5mins.

It is really fast.

I did one print, looked at it in situ and decided it would need a bit more reinforcement. Back to Shapr3D, add a little reinforcing spine. While I’m at it, get fancy and countersink the screws with a little chamfer. Export the STL, send it to the slicer, print.

Final part

I’d iterated on the part and printed it in 15mins.

All that was left was to screw it to the fence, and the panel dangles in the wind no more – soaking up photons and making electrons for the LEDs to sip on at night.

Installed on fence

Now I’m sat in a cafe writing this while my son plays football – and thought I’d play with the visualisation and AR tools in Shapr3D.

Amazing that this runs on a cheapish, non-super-powerful tablet (an iPad more than three years old) and is accessible enough for non-experts and maybe even kids.

Mundane maker magic on a Sunday.

Stochastic Corvids (not parrots!): Far-future Uplifted Crows as commentary on ChatGPT / GPT-n

Over the holidays I’ve been really enjoying “Children of Memory”.

It’s the (last?) book in Adrian Tchaikovsky’s “Children of…” series – an eon-and-galaxy-spanning set of stories where uplifted descendants of earth creatures interact with the remains of humanity on (generally) badly-terraformed worlds.

One thing that struck me like a gong was how perfectly coincident my reading was with the rise of ChatGPT, and the surrounding hype and hot-takes. Matt W’s recent post on AI and sentience pushed me over the edge.

I suspect the author of a tremendous feat of ‘skating to where the puck will be’, based on GPT-3 etc.

Without giving too much away, one of the uplifted life forms is a race of corvids – known as the Corvids, who exist as bonded pairs.

They are a kind of organic GAN, or generative adversarial network – constantly dismantling everything around them, learning and bickering their way toward incredibly effective solutions that other species miss – leading the other species in the book, including an advanced AI based on an uploaded human (who runs on a computational substrate made of ants, by the way…), to speculate on their sentience in much the same way as many have around GPT-n in the last year or two.

Here are a few passages from late in the book where that AI questions them about their apparent sentience:

Strutting around and shaking out their wings. Through all the means available to her, she watches them and tries to work out what it must be like to be them. Do they understand what has happened to them? They say they do, but that’s not necessarily the same thing.

She thinks of problem-solving AI algorithms from back in her day, which could often find remarkably unintuitive but effective solutions to things whilst being dumb as bricks in all other respects. And these were smart birds, nothing like that. She wanted them to drop the act, basically. She wanted them to shrug and eye each other and then admit they were human-like intellects, who’d just been perpetrating this odd scam for their own amusement. And yet the birds mutter to one another in their own jabber, quote poetry that predates whole civilizations, and refuse to let her in.

The two birds stand side by side, stiff as parade ground soldiers. As though they’re about to defend their thesis or give a final report to the board. ‘We understand the principles you refer to,’ Gothi states. ‘It was a matter that much concerned our progenitors on Rourke, after diplomatic relations were established between our two houses both alike in dignity.’ Word salad, as though some Dadaist was plucking ideas at random from a hat and ending up by chance with whole sentences. ‘Sentience,’ adds Gethli. ‘Is what is a what? And, if so, what?’ ‘You think,’ Kern all but accuses them. ‘You’d think we think,’ he either answers or gives back a mangled echo. ‘But we have thought about the subject and come to the considered conclusion that we do not think. And all that passes between us and within us is just mechanical complexity.’ ‘We have read the finest behavioural studies of the age, and do not find sentience within the animal kingdom, save potentially in that species which engineered us,’ Gothi agrees. ‘You’re telling me that you’re not sentient,’ Kern says. ‘You’re quoting references.’ ‘An adequate summation,’ Gethli agrees.

‘The essential fallacy,’ Gothi picks up, ‘is that humans and other biologically evolved, calculating engines feel themselves to be sentient, when sufficient investigation suggests this is not so. And that sentience, as imagined by the self-proclaimed sentient, is an illusion manufactured by a sufficiently complex series of neural interactions. A simulation, if you will.’ ‘On this basis, either everything of sufficient complexity is sentient, whether it feels itself to be or not, or nothing is,’ Gethli tells her. ‘We tend towards the latter. We know we don’t think, so why should anything else?’ ‘And in the grander scheme of things, it’s not important,’ Gothi concludes imperiously.

Children of Memory, Adrian Tchaikovsky

Wonderful stuff. Hugely recommended.

Does anyone know if Mr Tchaikovsky has commented on what approaches a keen-eyed (magpie?) satire of current AI hype in his work?

Optometrists, Octopii, Rubber Ducks & Centaurs: my talk at Design for AI, TU Delft, October 2022

I was fortunate to be invited to the wonderful (huge) campus of TU Delft earlier this year to give a talk on “Designing for AI.”

I felt a little bit more of an imposter than usual – as I’d left my role in the field nearly a year ago – but it felt like a nice opportunity to wrap up what I thought I’d learned in the last 6 years at Google Research.

Below is the recording of the talk – and my slides with speaker notes.

I’m very grateful to Phil Van Allen and Wing Man for the invitation and support. Thank you Elisa Giaccardi, Alessandro Bozzon, Dave Murray-Rust and everyone at the faculty of Industrial Design Engineering at TU Delft for organising a wonderful event.

The excellent talks of my estimable fellow speakers – Elizabeth Churchill, Caroline Sinders and John – can be found on the event site here.


Video of Matt Jones “Designing for AI” talk at TU Delft, October 2022

Slide 1

Hello!

Slide 2

This talk is mainly a bunch of work from my recent past – the last five or six years at Google Research. There may be some themes connecting the dots, I hope! I’ve tried to frame them in relation to a series of metaphors that have helped me engage with the engineering and computer science at play.

Slide 3

I won’t labour the definition of metaphor or why it’s so important in opening up the space of designing AI, especially as there is a great, whole paper about that by Dave Murray-Rust and colleagues! But I thought I would race through some of the metaphors I’ve encountered and used in my work in the past.

The term AI itself is best seen as a metaphor to be translated. John Giannandrea was my “grand boss” at Google and headed up Google Research when I joined. JG’s advice to me years ago still stands me in good stead for most projects in the space…

But the first metaphor I really want to address is that of the Optometrist.

This image of my friend Phil Gyford (thanks Phil!) shows him experiencing something many of us have done – taking an eye test in one of those wonderful steampunk contraptions where the optometrist asks you to stare through different lenses at a chart, while asking “Is it better like this? Or like this?”

This comes from the ‘optometrist algorithm’ work by colleagues in Google Research working with nuclear fusion researchers. The AI system optimising the fusion experiments presents experimental parameter options to a human scientist in the mode of an eye-testing optometrist: ‘better like this, or like this?’
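Not the actual algorithm from that work – just a minimal illustrative sketch of the interaction pattern, with made-up parameter names: the machine proposes pairs of settings, the human answers “better like this, or like this?”, and the search drifts toward what the human prefers.

```python
import random

# Illustrative sketch of the "optometrist" interaction pattern (not the
# actual fusion-research algorithm): the machine proposes pairs of
# experiment settings, the human picks the one that looks better, and
# the search drifts toward the preferred region. Parameter names are invented.

PARAM_RANGES = {"coil_current": (0.0, 1.0), "gas_pressure": (0.0, 1.0)}

def propose_pair(best):
    """Return the current best settings plus a nearby random variation."""
    candidate = {
        name: min(hi, max(lo, best[name] + random.uniform(-0.1, 0.1)))
        for name, (lo, hi) in PARAM_RANGES.items()
    }
    return best, candidate

def ask_human(a, b):
    """The optometrist question: 'better like this, or like this?'"""
    print(f"Option A: {a}\nOption B: {b}")
    return input("Better like this (a) or like this (b)? ").strip().lower() == "b"

def optimise(n_rounds=10):
    best = {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
    for _ in range(n_rounds):
        a, b = propose_pair(best)
        if ask_human(a, b):  # human preference stands in for an explicit objective
            best = b
    return best

if __name__ == "__main__":
    print("Preferred settings:", optimise())
```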

For me it calls to mind this famous scene of human-computer interaction: the photo enhancer in Blade Runner.

It makes the human the ineffable, intuitive hero – but perhaps masks some of the uncanny superhuman properties of what the machine is doing.

The AIs are magic black boxes, but so are the humans!

Which has led me in the past to consider such AI systems as ‘magic boxes’ in larger service design patterns.

How does the human operator ‘call in’ or address the magic box?

How do teams agree it’s ‘magic box’ time?

I think this work is as important as de-mystifying the boxes!

Lais de Almeida – a past colleague at Google Health and, before that, DeepMind – has looked at just this in terms of the complex interactions in clinical healthcare settings, through the lens of service design.

How does an AI system that can outperform human diagnosis (i.e. the retinopathy AI from DeepMind shown here) work within the expert human dynamics of the team?

My next metaphor might already be familiar to you – the centaur.

[Certainly I’ve talked about it before…!]

If you haven’t come across it:

Garry Kasparov famously took on the chess AI Deep Blue and was defeated (narrowly).

He came away from that encounter with an idea for a new form of chess where teams of humans and AIs played against other teams of humans and AIs… dubbed ‘centaur chess’ or ‘advanced chess’

I first started investigating this metaphorical interaction around 2016 – and at that time it manifested in things like Google’s autocomplete in Gmail etc. – but of course the LLM revolution has taken centaurs into new territory.

This very recent paper, for instance, looks at the use of LLMs not only to generate text but to couple that generation to other models that can “operate other machines” – i.e. act in the world, and on the world, based on what is generated (on your behalf, hopefully).
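A toy sketch of that coupling, not any particular system: generate() stands in for an LLM call (hard-coded here so the example runs without an API key), and the ‘machine’ it operates is just a stub function.

```python
import json

# Toy sketch of the "generate text, then act on the world" coupling
# described above – not any specific system. generate() is a stand-in
# for an LLM; ACTIONS stands in for the "other machines" it can operate.

def generate(prompt: str) -> str:
    """Placeholder for an LLM: returns a structured 'action' as JSON text."""
    return json.dumps({"action": "set_thermostat", "arguments": {"temperature_c": 19}})

ACTIONS = {
    "set_thermostat": lambda temperature_c: f"Thermostat set to {temperature_c}°C",
}

def act(prompt: str) -> str:
    """Couple generation to action: parse the model's output and dispatch it."""
    command = json.loads(generate(prompt))
    handler = ACTIONS[command["action"]]
    return handler(**command["arguments"])

print(act("It's a bit warm in here – could you sort it out?"))
```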

And the notion of a human/AI agent team is something I looked into with colleagues in Google Research’s AIUX team for a while – in numerous projects we did under the banner of “Project Lyra”.

Rather than AI systems that a human interacts with (e.g. a cloud-based assistant as a service), this would pair truly personal AI agents with human owners, working in tandem with tools and surfaces that they both use and interact with.

And I think there is something here to engage with in terms of ‘designing the AI we need’ – being conscious of when we make things that feel like ‘pedal-assist’ bikes, amplifying our abilities and reach, versus when we give power over to what political scientist David Runciman has described as the real worry: rather than AI, “AA” – Artificial Agency.

[nb this is interesting on that idea, also]

We worked with London-based design studio Special Projects on how we might ‘unbox’ and train a personal AI, allowing safe, playful practice space for the human and agent where it could learn preferences and boundaries in ‘co-piloting’ experiences.

For this we looked to techniques of teaching and developing ‘mastery’ to adapt into training kits that would come with your personal AI.

On the ‘pedal-assist’ side of the metaphor – the space of ‘amplification’ – I think there is also a question of embodiment in the interaction design and a tool’s “ready-to-hand”-ness. Related to ‘where the action is’ is ‘where the intelligence is’.

In 2016 I was at Google Research, working with a group that was pioneering techniques for on-device AI.

Moving the machine learning models and operations to a device gives great advantages in privacy and performance – but perhaps most notably in energy use.

If you process things ‘where the action is’ rather than firing up a radio to send information back and forth from the cloud, then you save a bunch of battery power…

Clips was a little autonomous camera with no viewfinder, trained out of the box to recognise what humans generally like to take pictures of – so you can be in the action. The ‘shutter’ button is just that – but it’s also a ‘voting’ button, training the device on what YOU want pictures of.

There is a neural network onboard the Clips, initially trained to look for what we think of as ‘great moments’ and capture them.

It had about 3 hours’ battery life and a 120º field of view, and could be held, put down on picnic tables, or clipped onto backpacks or clothing – designed so you don’t have to decide between being in the moment or capturing it. Crucially, all the photography and processing stays on the device until you decide what to do with it.

This sort of edge AI is important for performance and privacy – but also energy efficiency.

A mesh of situated “Small models loosely joined” is also a very interesting counter narrative to the current massive-model-in-the-cloud orthodoxy.

This from Pete Warden’s blog highlights the ‘difference that makes a difference’ in the physics of this approach!

And I hope you agree addressing the energy usage/GHG-production performance of our work should be part of the design approach.

Another example from around 2016–2017: the on-device “Now Playing” functionality built into Pixel phones to quickly identify music using recognisers running purely on the phone. Subsequent Pixel releases have leaned further on these approaches, with dedicated TPUs for on-device AI becoming selling points (as they have for iOS devices too!)

And as we know, we ourselves are not just brains – we are bodies… we have cognition all over our bodies.

Our first shipping AI on-device felt almost akin to these outposts of ‘thinking’ – small, simple, useful reflexes that we can distribute around our cyborg self.

And I think this approach again is a useful counter narrative that can reveal new opportunities – rather than the centralised cloud AI model, we look to intelligence distributed about ourselves and our environment.

A related technique pioneered by the group I worked in at Google is Federated Learning – allowing distributed devices to train privately to their context, but then aggregating that learning to share and improve the models for all while preserving privacy.
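A toy sketch of the core idea – federated averaging over simulated ‘devices’ – rather than Google’s production system: each device fits a model on its own private data, and only the weights, never the raw data, are sent back to be averaged.

```python
import numpy as np

# Toy sketch of federated averaging (FedAvg), not a production system:
# each simulated "device" fits a linear model on its own local data and
# shares only its weights; the server averages them into a global model.

rng = np.random.default_rng(0)
TRUE_W = np.array([2.0, -1.0])  # the underlying task all devices share

def make_local_data(n=50):
    """Each device sees its own private samples of the same task."""
    X = rng.normal(size=(n, 2))
    y = X @ TRUE_W + rng.normal(scale=0.1, size=n)
    return X, y

def local_train(X, y):
    """Device-side training: least-squares fit on local data only."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def federated_round(num_devices=10):
    """Server-side aggregation: average the weights, never touch raw data."""
    local_weights = [local_train(*make_local_data()) for _ in range(num_devices)]
    return np.mean(local_weights, axis=0)

if __name__ == "__main__":
    print("Global model after one round:", federated_round())
```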

This once semi-heretical approach has since become widespread practice in the industry, not just at Google.

My next metaphor builds further on this thought of distributed intelligence – the wonderful octopus!

I have always found this quote from ETH’s Bertrand Meyer inspiring… what if it’s all just knees! No ‘brains’ as such!!!

In Peter Godfrey-Smith’s recent book he explores different models of cognition and consciousness through the lens of the octopus.

What I find fascinating is the distributed, embodied (rather than centralized) model of cognition they appear to have – with most of their ‘brains’ being in their tentacles…

And moving to fiction, specifically SF – this wonderful book by Adrian Tchaikovsky depicts an advanced race of spacefaring octopi that have three minds working in concert in each individual: “three semi-autonomous but interdependent components, an arm-driven undermind (their Reach, as opposed to the Crown of their central brain or the Guise of their skin)”.

I want to focus on that idea of ‘guise’ from Tchaikovsky’s book – how we might show what a learned system is ‘thinking’ on the surface of interaction.

We worked with Been Kim and Emily Reif in Google Research, who were investigating interpretability in models using a technique called Testing with Concept Activation Vectors, or TCAV – allowing subjectivities like ‘adventurousness’ to be trained into a personalised model and then drawn onto a dynamic control surface for search: a constantly reacting ‘guise’ skin that allows a kind of two-player game between the human and their agent searching a space together.
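Roughly how a concept activation vector gets built, as a hedged sketch on toy embeddings rather than a real model’s activations: train a linear classifier to separate examples of the concept from random examples, then use that direction to score – and steer by – how much of the concept a new item expresses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sketch of a concept activation vector (CAV), in the spirit of TCAV:
# separate "concept" examples from random examples in activation space with
# a linear classifier, then use that direction to score how much of the
# concept (e.g. 'adventurousness') a new item's activation expresses.
# The activations here are random stand-ins for a real model's embeddings.

rng = np.random.default_rng(42)
DIM = 64

concept_acts = rng.normal(loc=0.5, size=(100, DIM))  # embeddings of concept examples
random_acts = rng.normal(loc=0.0, size=(100, DIM))   # embeddings of random examples

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * len(concept_acts) + [0] * len(random_acts))

clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])    # unit vector: the concept direction

def concept_score(activation):
    """Project an item's activation onto the concept direction; a UI slider
    or 'guise' surface could be driven directly from this score."""
    return float(activation @ cav)

print(concept_score(rng.normal(loc=0.5, size=DIM)))  # tends to score higher
print(concept_score(rng.normal(loc=0.0, size=DIM)))
```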

We built this prototype in 2018 with Nord Projects.

This is CavCam and CavStudio – more work using TCAVs by Nord Projects again, with Alison Lentz, Alice Moloney and others in Google Research, examining how these personalised trained models could become reactive ‘lenses’ for creative photography.

There are some lovely UI touches in this from Nord Projects also: for instance the outline of the shutter button glowing with differing intensity based on the AI confidence.

Finally – the Rubber Duck metaphor!

You may have heard the term ‘rubber duck debugging’ – whereby you solve your problems or escape creative blocks by explaining them out loud to a rubber duck – or, in our case, in this work from 2020 with my then team in Google Research (AIUX), to an AI agent.

We did this through the early stages of Covid, when we felt keenly the lack of the informal dialogue in the studio that leads to breakthroughs. Could we have LLM-powered agents on hand to help make up for that?

And I think that ‘social’ context for agents assisting creative work is what’s being highlighted here by the founder of MidJourney, David Holz. They deliberately placed their generative system in the social context of Discord to avoid the ‘blank canvas’ problem (as well as to supercharge their adoption). [reads quote]

But this latest much-discussed revolution in LLMs and generative AI is still very text based.

What happens if we take the interactions from magic words to magic canvases?

Or better yet multiplayer magic canvases?

There’s lots of exciting work here – and I’d point you (with some bias) towards an old intern colleague of ours, Gerard Serra, who is working at a startup in Barcelona called “Fermat”.

So finally – as I said, I don’t work on this as my day job any more!

I work for a company called Lunar Energy that has a mission of electrifying homes, and moving us from dependency on fossil fuels to renewable energy.

We make solar battery systems, but also the AI software that controls and connects fleets of batteries – optimising them based on what is happening in context.

For example this recent (September 2022) typhoon warning in Japan where we have a large fleet of batteries controlled by our Gridshare platform.

You can perhaps see in the time-series plot the battery sites ‘anticipating’ the approach of the typhoon and making sure they are charged to provide effective backup to the grid.
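The plot itself is real fleet behaviour from Gridshare which I can’t reproduce here, but the flavour of the logic – purely as an illustrative sketch, not our actual control code – is something like “if a severe event is forecast within the next day, charge to full ahead of it”. The forecast fields and thresholds below are invented placeholders.

```python
from dataclasses import dataclass

# Purely illustrative sketch (not Gridshare's actual logic): pre-charge a home
# battery when a severe-weather event is forecast within a lead-time window,
# so it is full before a likely grid outage. Fields and thresholds are invented.

@dataclass
class Forecast:
    hours_until_event: float  # e.g. hours until typhoon landfall
    severity: float           # 0..1, from a weather provider

@dataclass
class Battery:
    state_of_charge: float    # 0..1

PRECHARGE_LEAD_HOURS = 24
SEVERITY_THRESHOLD = 0.6

def target_charge(battery: Battery, forecast: Forecast) -> float:
    """Return the state of charge the site should aim for right now."""
    storm_incoming = (
        forecast.severity >= SEVERITY_THRESHOLD
        and forecast.hours_until_event <= PRECHARGE_LEAD_HOURS
    )
    if storm_incoming:
        return 1.0  # fill up ahead of the event to provide backup
    return max(battery.state_of_charge, 0.2)  # otherwise keep a normal reserve

print(target_charge(Battery(0.45), Forecast(hours_until_event=12, severity=0.8)))  # -> 1.0
```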

And I’m biased of course – but I think this, most of all, is the AI we need to be designing: AI that helps us at planetary scale. Which is why I’m very interested in the recent announcement of the https://antikythera.xyz/ program and where that might also lead institutions like TU Delft in this next crucial decade toward the goals of 2030.

Pharaoh Lovelock

Kamasi Washington’s obit of Pharoah Sanders

“Here was a man who played spiritual, cosmic music, from whom I wanted to know the secrets to the universe. But he was more interested in being in the moment and recognising the power of being in the moment. He showed me that connecting with the great beyond is sometimes about the simplest things.”

John Gray remembering James Lovelock

“Jim attributed his great old age to long daily walks – he lived to 103 and right up to the end his mind was very vivid. I joined him sometimes wandering through his grounds, where he’d let Gaia have its will. He had a cat and once the cat sat on my shoulder through the entire walk.”