The Cloud vs The Grid and Electrosheds workshop at AHO, Oslo, May 2023

It was wonderful to be invited back to AHO and Oslo in early May by my old friends and sometime colleagues there – and the opportunity to speak about past projects but also what I’m doing now at Lunar Energy.

My framing was around the two biggest human-built machines on this planet – the cloud and the grid.

The former is the (much-younger) result of emergent properties and software that has traversed boundaries of territory and nations, while the latter has been a mainly top-down, deliberate design which is very anchored to geography, regulation and legacy technology.

When you put them together you start to get some interesting new possibilities for our energy transition – e.g. virtual power plants as made possible by Lunar’s Gridshare platform.

The Cloud vs The Grid: talk for IxDA Oslo & AHO, May 2023

My talk was kindly hosted and supported by IxDA Oslo. Their extremely professional recording and transcription of the event was turned around in record time and can be found here.

Thanks to everyone who came, and your thoughtful questions and conversations afterwards. It was a lot of fun, and a delight to be back in Oslo after more than a decade.

The next day I hosted a workshop with Mosse at AHO for her students, which I entitled “Electrosheds”, after Kevin Kelly‘s “Big Here Quiz” that aims to locate you at the heart of your watershed and local ecology. I’ll talk about that in a separate post.

Optometrists, Octopii, Rubber Ducks & Centaurs: my talk at Design for AI, TU Delft, October 2022

I was fortunate to be invited to the wonderful (huge) campus of TU Delft earlier this year to give a talk on “Designing for AI.”

I felt a little bit more of an imposter than usual – as I’d left my role in the field nearly a year ago – but it felt like a nice opportunity to wrap up what I thought I’d learned in the last 6 years at Google Research.

Below is the recording of the talk – and my slides with speaker notes.

I’m very grateful to Phil Van Allen and Wing Man for the invitation and support. Thank you Elisa Giaccardi, Alessandro Bozzon, Dave Murray-Rust and everyone at the faculty of industrial design engineering at TU Delft for organising a wonderful event.

The excellent talks of my estimable fellow speakers – Elizabeth Churchill, Caroline Sinders and John – can be found on the event site here.


Video of Matt Jones “Designing for AI” talk at TU Delft, October 2022

Slide 1

Hello!

Slide 2

This talk is mainly a bunch of work from my recent past – the last 5/6 years at Google Research. There may be some themes connecting the dots I hope! I’ve tried to frame them in relation to a series of metaphors that have helped me engage with the engineering and computer science at play.

Slide 3

I won’t labour the definition of metaphor or why it’s so important in opening up the space of designing AI, especially as there is a great, whole paper about that by Dave Murray-Rust and colleagues! But I thought I would race through some of the metaphors I’ve encountered and used in my work in the past.

The term AI itself is best seen as a metaphor to be translated. John Giannandrea was my “grand boss” at Google and headed up Google Research when I joined. JG’s advice to me years ago still stands me in good stead for most projects in the space…

But the first metaphor I really want to address is that of the Optometrist.

This image of my friend Phil Gyford (thanks Phil!) shows him experiencing something many of us have done – taking an eye test in one of those wonderful steampunk contraptions where the optometrist asks you to stare through different lenses at a chart, while asking “Is it better like this? Or like this?”

This comes from the ‘optometrist algorithm’ work by colleagues in Google Research working with nuclear fusion researchers. The AI system optimising the fusion experiments presents experimental parameter options to a human scientist, in the mode of an eye-testing optometrist: ‘better like this, or like this?’
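A minimal sketch of that interaction pattern (not the published algorithm – the names and the simple proposal step here are my own illustration): the machine proposes, and the human only ever answers the optometrist’s question.

```python
import random

def propose_variant(params, scale=0.1):
    # Perturb the current settings to make a candidate "lens B".
    return {k: v + random.gauss(0, scale * (abs(v) or 1.0))
            for k, v in params.items()}

def optometrist_loop(initial_params, ask_human, rounds=20):
    current = initial_params
    for _ in range(rounds):
        candidate = propose_variant(current)
        # ask_human(a, b) returns True if b looks better than a --
        # the human never has to articulate *why*.
        if ask_human(current, candidate):
            current = candidate
    return current
```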

For me it calls to mind this famous scene of human-computer interaction: the photo enhancer in Blade Runner.

It makes the human the ineffable intuitive hero, but perhaps masks some of the uncanny superhuman properties of what the machine is doing.

The AIs are magic black boxes, but so are the humans!

Which has led me in the past to consider such AI systems as ‘magic boxes’ in larger service design patterns.

How does the human operator ‘call in’ or address the magic box?

How do teams agree it’s ‘magic box’ time?

I think this work is as important as de-mystifying the boxes!

Lais de Almeida – a past colleague at Google Health and before that DeepMind – has looked at just this in terms of the complex interactions in clinical healthcare settings through the lens of service design.

How does an AI system that can outperform human diagnosis (i.e. the retinopathy AI from DeepMind shown here) work within the expert human dynamics of the team?

My next metaphor might already be familiar to you – the centaur.

[Certainly I’ve talked about it before…!]

If you haven’t come across it:

Garry Kasparov famously took on chess AI Deep Blue and was defeated (narrowly).

He came away from that encounter with an idea for a new form of chess where teams of humans and AIs played against other teams of humans and AIs… dubbed ‘centaur chess’ or ‘advanced chess’

I first started investigating this metaphorical interaction around 2016 – when it manifested in things like Google’s autocomplete in Gmail etc – but of course the LLM revolution has taken centaurs into new territory.

This very recent paper, for instance, looks at the use of LLMs not only in generating text but then coupling that to other models that can “operate other machines” – i.e. act based on what is generated, in the world and on the world (on your behalf, hopefully).

And the notion of a human/AI agent team is something I looked into with colleagues in Google Research’s AIUX team for a while – in numerous projects we did under the banner of “Project Lyra”.

Rather than AI systems that a human interacts with (e.g. a cloud-based assistant as a service), this would pair truly-personal AI agents with human owners, working in tandem with tools and surfaces that they both use and interact with.

And I think there is something here to engage with in terms of ‘designing the AI we need’ – being conscious of when we make things that feel like ‘pedal-assist’ bikes, amplifying our abilities and reach, vs when we give power over to what political scientist David Runciman has described as the real worry: rather than AI, “AA” – Artificial Agency.

[nb this is interesting on that idea, also]

We worked with London-based design studio Special Projects on how we might ‘unbox’ and train a personal AI, allowing a safe, playful practice space for the human and agent where it could learn preferences and boundaries in ‘co-piloting’ experiences.

For this we looked to techniques of teaching and developing ‘mastery’ to adapt into training kits that would come with your personal AI.

On the ‘pedal-assist’ side of the metaphor – the space of ‘amplification’ – I think there is also a question of embodiment in the interaction design, and of a tool’s “ready-to-hand”-ness. Related to ‘where the action is’ is “where the intelligence is”.

In 2016 I was at Google Research, working with a group that was pioneering techniques for on-device AI.

Moving the machine learning models and operations to a device gives great advantages in privacy and performance – but perhaps most notably in energy use.

If you process things ‘where the action is’ rather than firing up a radio to send information back and forth from the cloud, then you save a bunch of battery power…

Clips was a little autonomous camera that has no viewfinder but is trained out of the box to recognise what humans generally like to take pictures of, so you can be in the action. The ‘shutter’ button is just that – but also a ‘voting’ button, training the device on what YOU want pictures of.

There is a neural network onboard Clips, initially trained to look for what we think of as ‘great moments’ and capture them.

It had about 3 hours of battery life and a 120º field of view; it could be held, put down on picnic tables, or clipped onto backpacks or clothing, and was designed so you don’t have to decide between being in the moment and capturing it. Crucially, all the photography and processing stays on the device until you decide what to do with it.
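A hypothetical sketch of that ‘voting button’ idea (none of this is the real Clips code): a manual shutter press is a positive label that nudges a tiny on-device scorer, and nothing leaves the device.

```python
import numpy as np

class MomentScorer:
    """Stands in for the onboard 'great moments' network (illustrative)."""
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def score(self, features):
        # Higher score = more worth capturing automatically.
        return float(features @ self.w)

    def vote(self, features, lr=0.1):
        # Shutter press = "I wanted this": move the scorer towards
        # whatever the user chose to capture. On-device only.
        self.w += lr * features
```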

This sort of edge AI is important for performance and privacy – but also energy efficiency.

A mesh of situated “Small models loosely joined” is also a very interesting counter narrative to the current massive-model-in-the-cloud orthodoxy.

This from Pete Warden’s blog highlights the ‘difference that makes a difference’ in the physics of this approach!
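The back-of-envelope arithmetic, with deliberately round, illustrative constants (not Warden’s actual figures), looks something like this:

```python
# Illustrative orders of magnitude: local compute vs. radioing raw data.
JOULES_PER_OP = 1e-9           # ~a nanojoule per arithmetic op (assumed)
JOULES_PER_BYTE_RADIO = 1e-5   # ~tens of microjoules per byte sent (assumed)

def energy_on_device(num_ops):
    return num_ops * JOULES_PER_OP

def energy_via_cloud(payload_bytes):
    return payload_bytes * JOULES_PER_BYTE_RADIO

print(energy_on_device(1_000_000))  # run a million-op model locally: ~0.001 J
print(energy_via_cloud(100_000))    # upload a 100 kB image instead: ~1 J, ~1000x more
```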

And I hope you agree that addressing the energy use and GHG production of our work should be part of the design approach.

Another example from around 2016-2017: the on-device “Now Playing” functionality built into Pixel phones to quickly identify music using recognisers running purely on the phone. Subsequent Pixel releases have leaned further on these approaches, with dedicated TPUs for on-device AI becoming selling points (as they have for iOS devices too!)

And as we know, we are not just brains – we are bodies… we have cognition all over our body.

Our first shipping AI on-device felt almost akin to these outposts of ‘thinking’ – small, simple, useful reflexes that we can distribute around our cyborg self.

And I think this approach again is a useful counter narrative that can reveal new opportunities – rather than the centralised cloud AI model, we look to intelligence distributed about ourselves and our environment.

A related technique pioneered by the group I worked in at Google is Federated Learning – allowing distributed devices to train privately to their context, but then aggregating that learning to share and improve the models for all while preserving privacy.

This once semi-heretical approach has since become widespread practice in the industry, not just at Google.
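A minimal sketch of the federated averaging idea at its heart, using a toy linear model as a stand-in (real systems add secure aggregation, weighting by dataset size, and much more):

```python
import numpy as np

def local_update(weights, client_data, lr=0.01, steps=10):
    # Train privately on-device: the raw examples never leave.
    x, y = client_data
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)  # toy linear-regression gradient
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    # The server only ever sees (and averages) weight vectors.
    updates = [local_update(global_weights, c) for c in clients]
    return np.mean(updates, axis=0)
```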

My next metaphor builds further on this thought of distributed intelligence – the wonderful octopus!

I have always found this quote from ETH’s Bertrand Meyer inspiring… what if it’s all just knees! No ‘brains’ as such!!!

In Peter Godfrey-Smith’s recent book he explores different models of cognition and consciousness through the lens of the octopus.

What I find fascinating is the distributed, embodied (rather than centralized) model of cognition they appear to have – with most of their ‘brains’ being in their tentacles…

And moving to fiction, specifically SF – this wonderful book by Adrian Tchaikovsky depicts an advanced race of spacefaring octopi in which each individual has three minds that work in concert: “three semi-autonomous but interdependent components, an arm-driven undermind (their Reach, as opposed to the Crown of their central brain or the Guise of their skin)”.

I want to focus on that idea of ‘guise’ from Tchaikovsky’s book – how we might show what a learned system is ‘thinking’ on the surface of interaction.

We worked with Been Kim and Emily Reif in Google Research, who were investigating interpretability in models using a technique called Testing with Concept Activation Vectors (TCAV) – allowing subjectivities like ‘adventurousness’ to be trained into a personalised model and then drawn onto a dynamic control surface for search: a constantly reacting ‘guise’ skin that allows a kind of ‘2-player’ game between the human and their agent searching a space together.
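The basic recipe for a concept activation vector is simple enough to sketch (this is the general TCAV recipe, not the project’s actual code): gather the model’s internal activations for examples of the concept and for random examples, then fit a linear classifier between them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    # Separate "concept" activations (e.g. images a user tagged as
    # adventurous) from random ones; the classifier's weight vector
    # is the CAV -- a direction of 'concept-ness' in activation space.
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.coef_[0]

def concept_score(activation, cav):
    # Projecting any new item onto the CAV gives the kind of continuous
    # signal a reactive 'guise' surface could visualise.
    return float(activation @ cav)
```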

We built this prototype in 2018 with Nord Projects.

This is CavCam and CavStudio – more work using TCAVs by Nord Projects, with Alison Lentz, Alice Moloney and others in Google Research, examining how these personalised trained models could become reactive ‘lenses’ for creative photography.

There are some lovely UI touches in this from Nord Projects also: for instance the outline of the shutter button glowing with differing intensity based on the AI confidence.

Finally – the Rubber Duck metaphor!

You may have heard the term ‘rubber duck debugging’, whereby you solve problems or escape creative blocks by explaining them out loud to a rubber duck – or, in this work from 2020 with my then team in Google Research (AIUX), to an AI agent.

We did this through the early stages of covid, when we keenly felt the lack of the informal dialogue in the studio that leads to breakthroughs. Could we have LLM-powered agents on hand to help make up for that?

And I think that ‘social’ context for agents assisting creative work is what’s being highlighted here by David Holz, the founder of Midjourney. They deliberately placed their generative system in the social context of Discord to avoid the ‘blank canvas’ problem (as well as to supercharge their adoption). [reads quote]

But this latest much-discussed revolution in LLMs and generative AI is still very text based.

What happens if we take the interactions from magic words to magic canvases?

Or better yet multiplayer magic canvases?

There’s lots of exciting work here – and I’d point you (with some bias) towards an old intern colleague of ours, Gerard Serra, working at a startup in Barcelona called “Fermat”.

So finally – as I said I don’t work at this as my day job any more!

I work for a company called Lunar Energy that has a mission of electrifying homes, and moving us from dependency on fossil fuels to renewable energy.

We make solar battery systems but also AI software that controls and connects battery systems – to optimise them based on what is happening in context.

For example, this recent (September 2022) typhoon warning in Japan, where we have a large fleet of batteries controlled by our Gridshare platform.

You can perhaps see in the time-series plot the battery sites ‘anticipating’ the approach of the typhoon and making sure they are charged to provide effective backup to the grid.

And I’m biased of course – but I think most of all this is the AI we need to be designing: AI that helps us at planetary scale. Which is why I’m very interested in the recent announcement of the https://antikythera.xyz/ program, and where that might lead institutions like TU Delft in this next crucial decade toward the goals of 2030.

Open Lecture at CIID: “Keeping up with the Kardashevians”

Back in April 2022, I was invited to speak as part of CIID’s Open Lecture series on my career so far (!) and what I’m working on now at Moixa.com.

Naturally, it ends up talking about trying to reframe the energy transition / climate emergency from a discourse of ‘sustainability’ to one of ‘abundance’ – referencing Russian physicists and Chobani yoghurt.

Thank you so much to Simona, Alie and the rest of the crew for hosting – it was a great audience, with a lot of old friends showing up, which was lovely (not that they spared the hard questions…).

I was on vacation at the time with minimal internet, so I ended up pre-recording the talk – allowing me the novelty of being able to heckle myself in the zoom chat…

CIID Open Lecture, Matt Jones, Apr 5 2022

CIID Open Lecture, April 2022

Hello, it’s very nice to be “here”!

Slide 1

Thank you CIID for inviting me. 

I’ll explain this silly title later, but for now let me introduce myself…

Slide 2

Simona and Alie asked me to give a little talk about my career, which of course sent me into a spiral of mortality and desperation. I’ve been doing whatever I do for a long time now. And the thing is, I’m not at all sure that matters much.

Slide 3: My life vs Moore’s Law

I’m 50 this year, you see. I’m guessing everyone has some understanding of Moore’s Law by now – things get more powerful, cheaper and smaller every 18 months to two years or so. I thought about what that means for what I’ve done over the last 50 years. Basically, everything I have worked on has changed a million-fold since I started working on it (not getting paid for it!)

Slide 4: Design vs Moore’s Law

If you’re a designer who works in, say, furniture or fashion, these effects are felt peripherally: maybe in terms of tools or techniques, but not at the core of what you do. I don’t mean to pick on Barber Osgerby here, but they started around the same time as me. You get to do deep, good work in a framework of appreciation that doesn’t change that much. Though I get the sense that even this has changed radically in the last few years – for many good reasons.

Slide 5: A book about BERG would make no sense.

Anyway – when I was asked to look back on work from BERG days over a decade ago, it’s hard to pretend it matters in the same way as when you did it. But perhaps it matters in different ways? I’m trying here to look for those threads and ideas which might still be useful.

Slide 6: Pace Layers & Short-Circuits

In design for tech we are building on shifting (and short-circuiting) pace layers. I’ve always found it most useful to think of them as connected rather than separate – slowly permeable cell membranes/semiconducting layers. New technology is often the wormhole or short-circuit across them.

Slide 7: BERG

So with that said, to BERG.

Slide 8: About BERG

BERG was a studio formed out of a partnership between Matt Webb and Jack Schulze. Tom Armitage joined, and myself shortly after. From there we grew into a small product invention and design consultancy – about 15 folks at our largest, but usually around 8-9. It was a great size – I’m still proud of the stuff we took on, and the way we did it.

Slide 9: All you can see of systems are surfaces.

One of the central tenets of BERG: all you can see of systems are surfaces. The complexity and interdependency of the modern world is not evident. And you can make choices as designers about how to handle that. Most design orthodoxy, then and now, drives towards ‘seamlessness’ – but we preferred Mark Weiser’s exhortation for ‘beautiful seams’ that would increase the legibility of systems while still making attractive, easy-to-engage surfaces…

Slide 10: Moore’s Law meets Main St.

Another central tenet of the studio: “What just got cheap and boring?” We looked to mass-produced toys and electronics, rather than solely to the cutting edge, for inspiration – understanding what had passed through the hype curve of the tech scene into what Gartner called ‘the trough of disillusionment’. This felt like the primordial soup of invention.

Slide 11: A tale of two Arthurs

We got called sci-fi. Design fiction. I don’t think we were sci-fi. We were more like 18th-century naturalists, trying to explain something we were in, to ourselves. I think we were more Brian Arthur than Arthur C. Clarke. We didn’t want to see tech as magic.

Slide 12: The nature of technology

Brian Arthur’s book “The Nature of Technology” was a huge influence on me at the time (and continues to be). An economist and scholar of network effects, he tries to establish how and why technology evolves and builds in value. In the book he explains how diverse ‘assemblages’ of scientific and engineering phenomena combine into new inventions. The give and take between human/cultural needs and emergent technical phenomena felt far more compelling and inspiring than the human-centred design orthodoxy of the time.

Slide 13: Thinking through making / chatting is cheating

This emphasis on exploring phenomena and tech as a path to invention we referred to as ‘material exploration’, and it was a phase of every studio project. We led with it, privileged it in the way contemporaries emphasised user research – sometimes to our detriment! But the studio was a vehicle for this kind of curiosity – and it’s what powered it.

Slide 14: Lamps for Google

“Lamps” was a very material-exploration heavy project for Google Creative Lab in 2010. It was early days for the commoditisation of computer vision, and also about the time that Google Glass was announced. We pushed this work to see what it would be like to have computer vision in the world *with you* as an actor rather than privatised behind glasses.

Slide 15: Smart Light

The premise was instead of computer vision reporting back to a personal UI, it would act in the world as a kind of projected ‘smart light’, that would have agency based on your commands.

Slide 16: Build your own ARKit, too early

To make these experiments we had to build a rig – an example of how the pace layers of past design work get short-circuited. Here’s our painstaking 2010 DIY version of ARKit – 10x the size, cost and pain – years before ARKit itself would come along.

Slide 17: watch the “Dumb things, smart light” video here

This final piece takes the smart light idea to a ‘speculative design’ conclusion. What if we made very dumb interactive blocks that were animated with ‘smart light’ computation from the cloud… There’s a little bit of a critical thought in here, but mainly we loved how strange and beautiful this could look!

Slide 18: Schooloscope

We also treated data as a material for exploration. One of the projects I’m always proudest of is Schooloscope, from 2009 (one of the first BERG studio projects, for Channel 4 in the UK). Led by Matt Webb, Matt Brown and Tom Armitage, it did a fantastic job of reframing contentious school performance data – from a cold emphasis on academic performance to a much more accessible and holistic presentation of a school for parents (and kids). Each school’s performance data creates an avatar, playing on our predisposition to interpret facial expression (after Chernoff).
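A hypothetical sketch of that Chernoff-style mapping, with invented feature names and scales: metrics drive facial features, so a parent reads an expression rather than a league table.

```python
def school_face(attainment, improvement, wellbeing):
    """Map metrics in 0..1 to parameters for a face renderer."""
    return {
        "mouth_curve": improvement * 2 - 1,     # -1 frown .. +1 smile
        "eye_openness": 0.3 + 0.7 * wellbeing,  # tired .. bright-eyed
        "brow_tilt": attainment - 0.5,          # worried .. confident
    }
```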

Slide 19: Suwappu

Another example of play: Suwappu was an exploration for a toy/media franchise for Dentsu. Each toy has an AR environment coded to it, with weekly updates to the storylines and interactions between them envisaged. Again, a metaverse in… reverse?

Slide 20: Cars for Intel

A lot of the studio work we couldn’t talk about publicly or release. I don’t think I’ve ever shown this before for instance, which was work we did for Intel looking at the ‘connected car’ and how it might relate to digital technology in cities and people’s pockets.

Slide 21: Car as playhead in the city – video proto

A lot of it was video prototyping work – provocations and anticipated directions that Intel’s advance design group could show to device and car manufacturers alike – to sell chips!

Slide 22: Light-painting interaction surfaces in cars

We deployed our usual bag of tools here – and came up with some interesting speculations – for instance, thinking about the whole car as interface, in response to the then-emerging trend of larger and larger touchscreen UIs in cars (which I still think is dangerous/wrong!!!)

Slide 23: Intel cars – video proto of smart light car diagnostics – watch here

Here’s another example of smart light – computer vision and projection in one product: an inspection lamp that makes the inscrutable workings of the modern car legible to the owner.

Slide 24: manifesto

Something we wrote as part of a talk back then.

I guess we were metaverse-averse before there was a metaverse (there still isn’t a metaverse)

Slide 25: William Gibson’s obituary for BERG

I left BERG in 2013 – it stopped doing consulting, and for a year it continued with a tighter focus on its IoT product platform ‘bergcloud’ and the Little Printer product.

In 2014 it shut up shop, which was marked by some nice things like this from William Gibson. Everyone went on to great things – Apple, Microsoft, and starting the innovative games studio Playdeo, for instance. In my case, I went to work for Google…

Slide 26: Google career vs Moore’s Law

So in 2013 I moved to NYC and started to work for Google Creative Lab – whom I’d first met working on the Lamps project. There I did a ton of concept and product design work which will never see the light of day (unfortunately) but also worked on a couple of things that made it out into the world.

Slide 27: Google Creative Lab: A Spacecraft for all

Creative Lab was part of the marketing wing of Google rather than the engineering group – so we worked often on pieces that showed off new products or tech – and also connected to (hopefully) worthy projects out in the world.

This piece, called Spacecraft For All, was a kind of interactive documentary about a group looking to salvage and repurpose a late-1970s NASA probe into an open-access platform for citizen science.

Slide 28: Google Creative Lab: A Spacecraft for all

It also got to show off how Chrome could combine video and webGL in what we thought was a really exciting way to explain stuff.

Slide 29: Project Sunroof

Another project I’m still pretty proud of from this period is Project Sunroof – a tool conceived by Google engineers working on search and maps to calculate the solar potential of a roof, and then connect people to solar installers.

Slide 30: Project Sunroof

We worked on the branding, interface and marketing of the service, which still exists in the USA. There were a number of other energy initiatives I worked on inside Google at the time – which was a much more curious (and hubristic!) entity back in the Larry/Sergey days – for good and for ill. 

Slide 31: Google Clips – On-device AI

One last project from Google – by this time (2016) I’d moved from Creative Lab to Google Research, working with a group that was pioneering techniques for on-device AI. Moving the machine learning models and operations to a device gives great advantages in privacy and performance – but perhaps most notably in energy use. If you process things ‘where the action is’ rather than firing up a radio to send information back and forth from the cloud, then you save a bunch of battery power… 

Clips is a little autonomous camera that has no viewfinder but is trained out of the box to recognise what humans generally like to take pictures of so you can be in the action. The ‘shutter’ button is just that – but also a ‘voting’ button – training the device on what YOU want pictures of.

Along with Clips, the ‘Now Playing’ audio recognition, many camera features in Pixel phones and local voice recognition all came from this group. I thought of these ML-powered features not as the ‘brain-centred’ AI we know from popular culture but more like the distributed, embodied neurons we have in our knees, stomach etc.

Slide 32: joining Moixa

At the beginning of this year I left Google and went to work for Moixa, an energy tech company that builds hardware and software to help move humanity to 100% clean energy. More about them later!

Slide 33: Career vs Global Heating

Instead of overlaying Moore’s Law on this step of my career, here is another graph, of an altogether less welcome progression. This is Professor Ed Hawkins’ visualisation of how the world has warmed from 1850 to 2018.

Slide 34: Fear, greed, despair and hope in climate tech

I’ve been thinking a lot – both prior to and since joining Moixa – about design’s role in helping with the transition to clean energy. And I think something that Matt Webb often talked about back in BERG days, about product invention, has some relevance.

And we all love a 2×2, right? 

He related a story that he in turn had heard (sorry, I don’t have a great scholarly citation here) about there being four types of product: Fear, Despair, Hope and Greed products.

Fear products are things you buy to stop terrible things happening. Despair products you have to buy – energy, toilet paper… Greed products might get you an advantage in life, or make you richer somehow (investments, MBAs…). But Hope products speak to something aspirational in us.

What might this be for energy?

Slide 35: Ministry for the future & Family Guy

You probably thought I was going to reference The Ministry for the Future by KSR, but hopefully I surprised you with Family Guy! Its creator, Seth MacFarlane, went on to create the optimistic love-letter to Star Trek called “The Orville”, and has this to say:

“Dystopia is good for drama because you’re starting with a conflict: your villain is the world. Writers on “Star Trek: The Next Generation” found it very difficult to work within the confines of a world where everything was going right. They objected to it. But I think that audiences loved it. They liked to see people who got along, and who lived in a world that was a blueprint for what we might achieve, rather than a warning of what might happen to us.” – I think we can say the same for the work of design.

Slide 36: KSR – Anti-Anti-Utopia

I’m going to read this quote from Kim Stanley Robinson. It’s long but hopefully worth it. 

“It’s important to remember that utopia and dystopia aren’t the only terms here. You need to use the Greimas rectangle and see that utopia has an opposite, dystopia, and also a contrary, the anti-utopia. For every concept there is both a not-concept and an anti-concept. So utopia is the idea that the political order could be run better. Dystopia is the not, being the idea that the political order could get worse. Anti-utopias are the anti, saying that the idea of utopia itself is wrong and bad, and that any attempt to try to make things better is sure to wind up making things worse, creating an intended or unintended totalitarian state, or some other such political disaster. 1984 and Brave New World are frequently cited examples of these positions. In 1984 the government is actively trying to make citizens miserable; in Brave New World, the government was first trying to make its citizens happy, but this backfired. … it is important to oppose political attacks on the idea of utopia, as these are usually reactionary statements on the behalf of the currently powerful, those who enjoy a poorly-hidden utopia-for-the-few alongside a dystopia-for-the-many. This observation provides the fourth term of the Greimas rectangle, often mysterious, but in this case perfectly clear: one must be anti-anti-utopian.

Slide 37: Dear Alice for Chobani by The Line

I’ve been reading a lot of solarpunk lately in search of such utopias. But the absolute best vision of a desirable future I have seen in recent years has come not from a tech company or a government – but from a yoghurt maker. This is the design of the future as a hope product.

“We worked closely with Chobani to realise their vision of a world worth fighting for. It’s not a perfect utopia, but a version of a future we can all reach if we just decide to put in the work. We love the aspiration in Chobani’s vision of the future and hope it will sow the seeds of optimism and feed our imagination for what the future could be. It’s a vision we can totally get behind. We couldn’t be more happy to be part of this campaign.” – The Line

Slide 38: Goal is not ‘sustainability’. Goal is to get to Type 1 Kardashev

In 1964 a physicist named Nikolai Kardashev proposed a speculative scale, or typology, of civilisations based on their ability to harness energy.

Humans are currently at around 0.7 on the scale.

A Type I civilization is usually defined as one that can harness all the energy that reaches its home planet from its parent star (for Earth, this value is around 2×10^17 watts), which is about four orders of magnitude higher than the amount presently attained on Earth, with energy consumption at ≈2×10^13 watts as of 2020.

So, four orders of magnitude more energy is possible just from the solar potential of Earth.
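(The roughly-0.7 figure is usually derived from Carl Sagan’s interpolation of Kardashev’s scale; as a quick check, with P in watts:)

```latex
K = \frac{\log_{10} P - 6}{10}, \qquad
K_{\text{today}} = \frac{\log_{10}(2\times10^{13}) - 6}{10}
\approx \frac{13.3 - 6}{10} \approx 0.73
```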

A Type 1 future could be glorious. A protopia.

Slide 39: Moixa

At Moixa we make something that we hope is a building block of something like this – solar energy storage batteries that can be networked together with software to create virtual power plants, that can replace fossil fuels. It’s one part of our mission to create 100% electric homes this decade.

The home is a place where design and desire become important for change. I hope we can make energy transition in the home something that is aspirational and accessible with good design.

Slide 40: Saul Griffith’s Electrify

I’ve also been very inspired by Saul Griffith’s book “Electrify” – please go read it at once! It points out a ton of design and product opportunity over the coming decade to move to clean, electric-powered lives.

As he says: 

“I think our failure on fixing climate change is just a rhetorical failure of imagination. We haven’t been able to convince ourselves that it’s going to be great. It’s going to be great.”

– Saul Griffith

Slide 41: New manifesto!

I’ll finish with a couple more quotes:

“If we can make it through the second half of this century, there’s a very good chance that what we’ll end up with is a really wonderful world”

Jamais Cascio

“An adequate life provided for all living beings is something the planet can still do; it has sufficient resources, and the sun provides enough energy. There is a sufficiency, in other words; adequacy for all is not physically impossible. It won’t be easy to arrange, obviously, because it would be a total civilizational project, involving technologies, systems, and power dynamics; but it is possible. This description of the situation may not remain true for too many more years, but while it does, since we can create a sustainable civilization, we should. If dystopia helps to scare us into working harder on that project, which maybe it does, then fine: dystopia. But always in service to the main project, which is utopia.”

Kim Stanley Robinson

Slide 42 (of course!)

Speaking my brains about future brains this year

Got some fun speaking gigs lined up, mainly going to be talking (somewhat obliquely) about my work at Google AI over the last few years and why we need to make centaurs not butlers.

June

August

November

Then I’ll probably shut up again for a few years.

“Every setback has a huge silver lining” – Andrew Ritchie of Brompton on slow invention, design and manufacture in the UK

Andrew Ritchie in the Circle Of Bromptons

Rough notes from tonight’s talk by Andrew Ritchie, founder and inventor of the Brompton bicycle. Much paraphrasing and missing out of crucial bits I’m sure.

Andrew Ritchie/brompton

1st prototype for 1000 GBP in late seventies
Looking for a licensee
No big companies are actively looking to increase the risk they are exposed to or increase their portfolio of projects
Only option was to manufacture themselves
“Why don’t you find 30 ppl and charge them 250 for a bike you haven’t yet built and guarantee them their money back once the company is running”
18 months later… Still trying to manufacture…
“A degree in engineering is all very well but it’s no substitute for metal bashing”
1981 Small firms loan guarantee scheme (recently resuscitated?)
Pilot production, basic tooling, space in Kew nr the tube station
“Patrick the brazer said he’d worked in an open sided shed in Aberdeen, he didn’t mind the cold”
Hinge supplier stopped supplying, spent three months milling hinges himself from solid blocks of metal
“We needed a 150 grand to get going, got 80. That wasn’t going to stop us.”
1987, after the gales, moved into the railway arch…
“we got cracking and started making bikes. Everything went wrong.”
“change is a bloody nuisance” – as conservative about his manufacturing as the channel/dealers were when he started. Patience.
Sales abroad came to 2/3. Stayed the same ever since.
7.5% discount to those dealers who paid in 10 days; never had any trouble collecting cash. Doesn’t know why it’s not common practice – most firms give 2.5%, and so people don’t bother to pay early.
Sturmey-archer disaster… Went bust. Stopped supply of the hub gears
German firm said we’ll do something special for you
“I didn’t like the five speed, so I made them more expensive…” People started buying more…
Titanium bits. The titanium workers in Russia are spinoff of ussr space program…
“I hate marketing. Lovely people but as far as I’m concerned make something good and people will buy it. You don’t need some touchy feely story.”
Cultural issues in growing a manufacturing company are the biggest challenge. Growth of 25% a year is the target, very challenging.
Wouldn’t have worked if this had been attempted quickly, all the failure and hardship has made the product and company what it is.
“bromptons are far too expensive at the moment, I’m very sorry.”
“there’s masses we can do to improve what we do, we’re always trying to improve”
“Took my time and solved problems because I didn’t have a business plan”
“I’m very glad the hinge supplier went bust, because that made me improve the design. If I’d continued there would have been thousands of bikes full of errors”
“all these setbacks had huge silver linings”

Antichronos

Webb me sent just this:

“What he came up with was three different temporal dimensions – the first moving very fast, at the speed of light, the second very slow and “vibrating slowly back and forth, as if the universe itself were a single string or bubble”, the third – antichronos – in reverse. We experience them as one, creating a three-way interference pattern, which accounts for sensations such as foresight, déjà vu, nostalgia and precognition. The compound nature of time, Robinson writes, “creates our perception of both transience and permanence, of being and becoming”. He’s shown the novel to people who are “much more serious about the time travel stuff” and they’re “having a blast”. “They immediately map my three strands of time onto their system. They think I’ve partially discovered the real thing,” he says gleefully.”

Ago weeks of couple a Utrecht in DxF2009 at gave I talk this to link to way nice a is which.

Polite, pertinent and pretty: a talk at Web2.0expo SF, April 2008

To which you could add ‘tardy’: a shameful two months after the event the slides and notes from the talk are now up online here. Sorry to everyone who asked for them – and thanks for your patience!

It was a presentation by Tom Coates and myself on an area that fascinates us both – the coming age of practical ubicomp/spimes/everyware.

Although hopefully grounded in some of the design ideas explored in our respective current projects, it was a whistlestop tour around the ideas and conversations of many.

The title slide shows Timo Arnall‘s everyware symbols and obviously, Adam Greenfield‘s and Bruce Sterling‘s books loom large, as well as the work of Dan Hill, Matthew Chalmers, Anne Galloway, Schulze and Webb, Christian Nold and many others who I’ve been fortunate to meet, mail or read around this subject.

There’s certainly some scenius going on. As if to underline this, Nicolas Nova’s posted his slides from what sounds like a fascinating talk today: “Digital Yet Invisible: Making Ambient Informatics More Explicit to People”.

Looking forward to a summer of more digital/physical brainfood…

Black Swan Green

Black Swan Green, originally uploaded by blackbeltjones.

“It is inevitable that we will be massively blindsided by events, because our understanding is misled by an array of beguiling illusions about reality.”


Stewart Brand on Nassim Taleb, in his introduction to Taleb’s forthcoming Long Now [SALT] talk, entitled: “The Future Has Always Been Crazier Than We Thought”

London Games Festival: The Future of AI in games

Imperial

Just been to a talk at Imperial College London, put on as part of the London Games Festival, presenting viewpoints from the games industry (Peter Molyneux and someone from Eidos) and from AI academia. Very accessible and interesting.

I’ve tried my best to do an Alice, but I’ve not quite got the knack – so far-from-verbatim notes below:

The future of AI in games
London Games Festival

4.10.06

peter molyneux, prof. marc cavazza, dr. simon colton

intro
john cass, icl

article in the economist from the summer (CF)

next challenge is to develop believable characters and intelligences in game worlds

bring together two communities: the game developers from industry and the artificial intelligence research community from academia

take industry to a new level

—-

peter molyneux

this is the most interesting area of game design to him

sorry – on behalf of games industry for grabbing the term AI and totally abusing it.

there is very little real AI in games

AI is mistaken for
– navigation
– avoidance
– crude simulations
– scripted behaviour

this is where we are, where do we want to be?

we need a whole raft of REAL AI and we’re starting to get the processing power to do it. next gen consoles could be the key.

– agent AI: need for convincing characters, recognizing what you are doing as a player. we are doing so much more as players – more freedom, more emotion. fable2: friendship, family – relationships… how to do this convincingly?

– cloning AI: online is here to stay and this creates big problems… what about having a clone of yourself to remain in a persistent world so you can stay ‘present’ when you should go to sleep (UK vs. australia)

– learning AI – adapting to players and play.

– balancing AI: we’ve failed because we are not mass market – we only appeal to a very small audience… biggest game = 20m, should be 200m… one of the reasons we have not got the reach is that we have no way to balance the difficulty of the game – looking at how the player plays and balancing the game play accordingly (cf. csikszentmihalyi’s flow, robin hunicke’s work)
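(a minimal sketch of what that balancing loop could look like – names and constants are illustrative, not from the talk:)

```python
TARGET_SUCCESS = 0.7  # aim for the "flow channel": challenged, not crushed

def adjust_difficulty(difficulty, recent_outcomes, step=0.05):
    """recent_outcomes: list of 1 (player succeeded) / 0 (player failed)."""
    success_rate = sum(recent_outcomes) / len(recent_outcomes)
    if success_rate > TARGET_SUCCESS:
        difficulty += step   # player is cruising: push harder
    elif success_rate < TARGET_SUCCESS:
        difficulty -= step   # player is struggling: ease off
    return max(0.0, min(1.0, difficulty))
```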

AI future – will change the way that games are designed, create new types of game, create unique experiences… my game experience will be different from yours. far more realistic worlds can be created… visually we are getting close, but need great AI to back this up otherwise they will feel flawed. i will be able to stand up in 5yrs time and say look at how games have changed due to AI.

—-
DR. MARC CAVAZZA, UNIVERSITY OF TEESSIDE

AI for interactive storytelling

‘long term endeavor to reconcile linear story and interaction’

reincorporate aesthetic qualities of linear media

character-based storytelling: Hierarchical Task Network (HTN) planning (AI technique – look up?) to describe characters’ roles.

AI maintains consistency of the story, while allowing adaptation… but often driving towards satisfying conclusion (interactive storytelling is not just changing the ending!)

sitcom generator: each character’s role is described as an HTN plan. (modelled on ‘Friends’)

dynamic interactions between characters contribute to generating multiple situations not encoded in the original roles.

sitcom chosen to test the theory – as they are essentially/generally simple story forms (not shakespeare!)

we are generating a lot of stories and a lot of them are rubbish… need to filter these… and we can only generate about 6mins…

what’s the diff between this and The Sims? Sims have no narrative drive, they react (narrative is in the eye of the beholder)

every time these characters act.. they have a plan.

silent movies atm, but the next step is dialogue.
this is very processing-power intensive, but making progress with small-scale demonstrations. (shows one) Scalability is not really there atm.

real challenge is to develop true interactive storytelling capabilities.

The world is an actor: the world’s behaviour drives narrative events. blurring the boundaries of physics and AI – the world is ‘plotting against the character’… inspired by the ‘final destination’ movies!

the whole environment ‘has a plan’

it’s easy to look clever in AI with small examples; the real challenge is scalability… but we think the principles here are sound.

(doing research project with DTI/Eidos)

Dr. Simon Colton
AI and Games – Do’s and Don’ts

(games industry) unhealthy obsession #1: the modeling of opponents

(AI academia) unhealthy obsession #2: playing board games
From the machine learning journal: ‘learning to bid in bridge’ is a 30 yr project and it’s still going!

multiple mismatches in these two worlds
– AI in games has low RAM, low cycles, low time
– AI agents really want lots of RAM, time, cycles

– ‘An AI’, as the term is used in games, does not exist as termed by academia… a ‘complete AI’ would have emotional intelligence, reasoning, etc…

we’re developing AI the wrong way round – higher reasoning rather than basic instincts (cf. rodney brooks)

– ‘playing chess is a doddle compared to avoiding a tiger’

– AI researchers think it’s about BEATING the player, whereas games industry want AIs to help engage the player further in the game world.

so, what else can we do

– data mining game-play data
— changing how the game plays
– affective computing (HCI)
— how to tell from a players face what their emotional response is and changing game-play
– automatic avatars (to step in your place for sleep and toilet breaks!)
– but could be most useful in the design stage

comparison to the biotech industry
is designing a game more difficult than designing a drug? maybe? do drug companies have more funds? more IP issues? maybe?
BUT – drug companies absolutely make more use of AI in their design process than the games industry…

picks and shovels (where the money is) – getting the computer to program itself (misused phrase, but…)
– machine learning
– genetic programming
— combining gives more than the sum of parts

one possible approach

an evolutionary approach enables you to generate new entities for games – NPCs, cars, objects… program AIs to use middleware to create these things

AI makes 100 bad models of a football – choose the best 10, then breed… 1000s of generations later you get valuable assets…

machine learns your aesthetic as a designer…
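(a minimal sketch of that evolutionary loop, with an invented representation – each asset is just a list of numbers, and the designer’s picks, or a learned model of their taste, supply the fitness:)

```python
import random

def mutate(asset, rate=0.1):
    return [g + random.gauss(0, rate) for g in asset]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(population, pick_best, generations=1000, keep=10):
    for _ in range(generations):
        parents = pick_best(population, keep)  # e.g. designer keeps best 10 of 100
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(len(population))]
    return population
```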

AI for game environment design

possible human-computer interaction in the design phase of games

designer creates a few buildings in his/her style
AI takes over and creates the rest of the city; designer refines the process…

great at design stage, but possibilities at run-time…

now the hard part: it’s still not easy to use AI/machine learning techniques in an off-the-shelf manner
– the best techniques come with a human (expert)

majority of AI academics don’t know how games are designed – start of a conversation?

summary: good AI opponents still a way off

AI people should think about engaging rather than conquering opponents

games people should think more about using AI tools in the design phase.

google: “AI bite”