Mundane maker magic: backyard bespoke manufacturing with Shapr3D and the AnkerMake M5

This is incredibly mundane but, as with most blog posts, that doesn’t stop me from writing it down.

We have a string of solar-powered LED lights in our backyard.

The plastic stand that it was supplied with broke, and so for the past few months it has been precariously balanced on various branches, bits of fence etc – falling off into shadow and powerlessness – blown by the wind or local cats on the prowl.

I wanted to make a replacement, but could never find the time / energy / foolhardiness to fire up one of the big sledgehammer CAD apps to crack this particular nut.

I’d played with Shapr3D in the past to quickly sketch things while working at Google – but never considered it for 3D-printed output. I fired it up last night and in five minutes – with a glass of wine and Taskmaster on in the background – had made what I needed.

Designing the part (in five minutes) with Shapr3D on iPad

This morning, I printed it out on the AnkerMake M5. This thing has been a revelation.

I first got myself a cheap (<£500) 3D printer about 5 years ago – but it was damn fiddly.

It never really worked well, and the amount of set-up and breakdown after every (terrible) print meant that it sat unused most of the time while I sent prints off to be done – or, while at Google, prevailed upon colleagues to print something for me (shhhh).

The AnkerMake M5 is a different proposition entirely. While a little more expensive than my first (disappointing) machine, it lives up to the role that 3D printing has played in the imaginations of futurists, designers and maker-scene dilettantes (like me) for the past decade or so.

You print stuff. And it works. Fast.

That’s it.

Their bundled slicer software is pretty good – good enough for me at least – and you can have something like this little part spat out in about five minutes.

It is really fast.

I did one print, looked at it in situ and decided it would need a bit more reinforcement. Back to Shapr3D, add a little reinforcing spine. While I’m at it, get fancy and countersink the screws with a little chamfer. Export STL, send to the slicer, print.
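(Shapr3D is entirely point-and-drag, but if you prefer your CAD as code, the same sort of part – a plate, a reinforcing spine, a couple of countersunk screw holes – could be sketched in something like CadQuery. A rough, hypothetical version; the dimensions are invented for illustration, not the actual bracket:)

```python
# Hypothetical code-first take on the bracket: a plate, a reinforcing rib and
# two countersunk screw holes. Dimensions are made up for illustration only.
import cadquery as cq

plate = (
    cq.Workplane("XY")
    .box(60, 25, 4)                        # main plate, 60 x 25 x 4 mm
    .faces(">Z").workplane()
    .pushPoints([(-22, 0), (22, 0)])
    .cskHole(3.5, 7.0, 82)                 # countersunk holes for small wood screws
)

spine = (
    cq.Workplane("XY")
    .workplane(offset=2)                   # sit on top of the 4 mm plate
    .center(0, 10)
    .box(60, 5, 10, centered=(True, True, False))  # simple reinforcing rib along one edge
)

bracket = plate.union(spine)
cq.exporters.export(bracket, "panel_bracket.stl")  # straight into the slicer
```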

Final part

I’d iterated on the part and printed it in 15 minutes.

All that was left was to screw it to the fence, and the panel dangles in the wind no more – soaking up photons and making electrons for the LEDs to sip on at night.

Installed on fence

Now I’m sat in a cafe writing this while my son plays football – and thought I’d play with the visualisation and AR tools in Shapr3D.

Amazing that this is on a cheapish, non-super-powerful tablet (a 3+ year-old iPad) and accessible enough for non-experts and maybe even kids.

Mundane maker magic on a Sunday.

Shaviro on Tchaikovsky’s Corvids and LLMs

Came across Steven Shaviro’s thoughts on the critique of AI implicit in Adrian Tchaikovsky‘s excellent “Children of Memory”, which I’d also picked up on. Of course, being Steven, it’s much more eloquent and thought-provoking, so I thought I’d paste it here.

The Corvids deny that they are sentient; the actual situation seems to be that sentience inheres in their combined operations, but does not quite exist in either of their brains taken separately. In certain ways, the Corvids in the novel remind me of current AI inventions such as ChatGPT; they emit sentences that are insightful, and quote bits and fragments of human discourse and culture in ways that are entirely apt; but (as with our current level of AI) it is not certain that they actually “understand” what they are doing and saying (of course this depends in part on how we define understanding). Children of Memory is powerful in the way that it raises questions of this sort — ones that are very much apropos in the actual world in terms of the powers and effects of the latest AI — but rejects simplistic pro- and con- answers alike, and instead shows us the difficulty and range of such questions. At one point the Corvids remark that “we know that we don’t think,” and suggests that other organisms’ self-attribution of sentience is nothing more than “a simulation.” But of course, how can you know you do not think without thinking this? and what is the distinction between a powerful simulation and that which it is simulating? None of these questions have obvious answers; the novel gives a better account of their complexity than the other, more straightforward arguments about them have done. (Which is, as far as I am concerned, another example of the speculative heft of science fiction; the questions are posed in such a manner that they resist philosophical resolution, but continue to resonate in their difficulty).

http://www.shaviro.com/Blog/?p=1866

Playing with the Grid (and cocktails)

In the introduction to Malcolm McCullough’s #breezepunk classic “Downtime on the Microgrid” he says…

This past week we (Lunar Energy) sponsored a social event for an energy industry conference in Amsterdam.

There were the usual opportunities offered to ‘dress’ the space, put up posters, screens etc etc.

We even got to name a cocktail (“lunar lift-off” I think they called it! I guess moonshots would have been a different kind of party…) – but what we landed on was… coasters…

Paulina Plizga, who joined us earlier this year, came up with some lovely, playful recycled-cardboard coasters – featuring interconnected designs of pieces of a near-future electrical grid (enabled by our Gridshare software, natch) and stats from our experience in running digital, responsive grids so far.

Lunar Gridshare drinks coasters by Paulina Plizga

The inspiration for these partly references Ken Garland’s seminal “Connect” game for Galt toys – could we make a little ‘infinite game’ with coasters as play pieces for the attendees?

Image by Ben Terrett

If you’ve ever been to something like this – and you’re anything like me – then you might want something to fidget with, to help start a conversation… or just to be distracted by for a moment! We thought these could also serve a social role in helping the event along, not just keeping your drink from marking the furniture!

I was delighted when our colleagues who were attending said Paulina’s designs were a hit – and that they had actually used them to give impromptu presentations on Gridshare to attendees!

So a little bit more playful grid awareness over drinks! What could be better?

Thank you Ken Garland!

There’s a lovely site here with more of Ken’s wonderful work on games and toys for Galt. And here’s Ben’s post about it from a while back – which is where I stole the picture from, as I’m writing this in a cafe and can’t take a picture of my own set!

🐙 Octopi, very fast, very heavy toddlers made of steel, and self-driving tests

Jason points to a great piece on Large Language Models, ChatGPT etc.

“Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other.

Meanwhile, O, a hyperintelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B’s conversations. O knows nothing about English initially but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A’s utterances.

Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do — with meaning and intent. Then one day A calls out: “I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud.”

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html via Kottke.org
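(A toy way to see what O is up to: the octopus only ever learns surface statistics. Here’s a hypothetical few lines of Python – a bigram model that predicts the next word in a conversation from co-occurrence counts alone, with no idea what a bear or a stick actually is:)

```python
# Toy "octopus": learns word co-occurrence statistics from a conversation,
# never the things the words refer to.
from collections import defaultdict, Counter

def train_bigrams(text):
    words = text.split()
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def predict_next(table, word):
    # Pick the most statistically likely continuation -- no "understanding" involved.
    followers = table.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

conversation = "the bear is near the hut and the bear is angry"
model = train_bigrams(conversation)
print(predict_next(model, "bear"))  # -> "is", purely from co-occurrence counts
```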

He goes on to talk about his experiences ‘managing’ a semi-self-driving car (I think it might be a Volvo, like I used to own?), where you have to be aware that the thing is an incredibly heavy, very fast toddler made of steel, with Dunning-Kruger-ish marketing promises pasted all over the top of it.

“You can’t ever forget the self-driver is like a 4-year-old kid mimicking the act of driving and isn’t capable of thinking like a human when it needs to. You forget that and you can die.”

That was absolutely my experience of my previous car too.

It was great for long stretches of motorway (freeway) driving in normal conditions, but if it was raining or things got more twisty/rural (which they do in most of the UK quite quickly), you switched it off sharpish.

I’m renting a Tesla (I know, I know) for the first time on my next trip to the States. It was a cheap deal, and it’s an EV, and it’s California, so I figure why not. I don’t think I’ll use Autopilot though, having used semi-autonomous (Level 2? 3?) driving before.

Perhaps there needs to be a ‘self-driving test’ for the humans about to go into partnership with very fast, very heavy semi-autonomous non-human toddlers before they are allowed on the roads with them…

ReckonsGPT / Call My Bluffbot

This blog has turned into a Tobias Revell reblog/Stan account, so here’s a link to his nice riff on ChatGPT this week.

“LLMs are like being at the pub with friends, it can say things that sound plausible and true enough and no one really needs to check because who cares?”

Tobias Revell – “BOX090: THE TWEET THAT SANK $100BN”

Ben Terrett was the first person I heard quoting (indirectly) Mitchell & Webb’s notion of ‘Reckons’ – strongly held opinions that are loosely joined to anything factual or directly experienced.

Send us your reckons – Mitchell & Webb

LLMs are massive reckon machines.

Once upon a BERG time, Matt Webb and I used to get invited to things like FooCamp (MW still does…) and beforehand we camped out in the Sierra Nevada, far away from any network connection.

While there we spent a night amongst the giant redwoods, drinking whisky and concocting “things that sound plausible and true enough and no one really needs to check because who cares”.

It was fun.

We didn’t of course then feed those things back into any kind of mainstream discourse or corpus of writings that would inform a web search…

In my last year at Google I worked a little with LaMDA.

The main thing that the learned UX and research colleagues investigating how it might be productised seemed clear on was that we would have to remind people that these things are incredibly plausible liars.

Moreover, anyone thinking of using it in a product should be incredibly cautious.

That Google was “late to market” with a ChatGPT competitor is a feature not a bug as far as I’m concerned. It shouldn’t be treated as an answer machine.

It’s a reckon machine.

And most people outside of the tech industry hypetariat should worry about that.

And what it means for Google’s mission of “organising the world’s information and making it universally accessible” – not that Google might be getting Nokia’d.

The impact of a search engine’s results on societies that treat them as scaffolding is the real problem…

Cory says it better here.

Anyway.

My shallow technoptimism will be called into question if I keep going like this so let’s finish on a stupid idea.

British readers of a certain vintage (mine) might recall a TV show called “Call My Bluff” – where plausible lying about the meaning of obscure words by charming middlebrow celebrities was rewarded.

Here’s Sir David Attenborough to explain it:

Attenborough on competitive organic LLMs as entertainment

It’s since been kinda remixed into Would I Lie to You? (featuring David Mitchell…) and if you haven’t watched Bob Mortimer’s epic stories from that show – go, now.

Perhaps – as a public service – the BBC and the Turing Institute could bring Call My Bluff back – using the contemporary UK population’s love of a competitive game show format (The Bake Off, Strictly, Taskmaster) to involve them in an adversarial critical network to root out LLMs’ fibs.

The UK then would have a massive trained model as a national asset, rocketing it back to post-Brexit relevance!

“Smaller, cuter, weirder, fluttery”: Filtered for the #Breezepunk Future

I’m stealing Matt Webb’s “filtered for” format here – for a bunch of more or less loosely connected items that I want to post, associate and log as much for myself as to share.

And – I’ll admit – to remove the friction from posting something without having a strong thread or thesis to connect them.

I’ve pre-ordered “No Miracles Needed” by Mark Jacobson, which I’m looking forward to reading in February. I found out about it through this Guardian post a week or so ago.

The good news below from Simon Evans seems to support Prof Jacobson’s hypothesis…

Breezepunk has been knocking around in my head since Tobias mentioned it on this podcast…

Here’s the transcript of the video (transcribed by machine, of course) of Tobias describing the invention, by scientists/engineers at Nanyang Polytechnic in Singapore, of a very small-scale, low-power way of harnessing wind energy:

“I found this sort of approach really interesting but mostly I like the small scale of it yes I like the fact that it’s you know it’s something that you could imagine just proliferating as a standard component that’s attached to sort of Street Furniture or things around the house or whatever it is you might put them on your windowsill because they’re quite small and they just generate like enough power to make a sensor work or a light or something and yeah it’s this this alternative future to the big powerful set piece green Energy Future that’s obviously being pushed and should continue to be pushed because that’s competing against the big Power and the fossil fuel future but I like this idea of like the smaller cuter weirder fluttery imagine it’s quite fluttery yeah so yeah so this is this is Breeze Punk everybody…”

I like the idea of it being a standard component – a lego. A breezeblock?

Breezepunk breezeblock?

My sketching went from something initially much more like a bug hotel, or one of those bricks that bees are meant to nest in; there’s something like a fractal Unité d’Habitation happening in the final sketch.

I also like #Breezepunk a lot – very Chobani Cinematic Universe.

I would like it to become… a thing. I suppose that’s why I’m writing this.

Used to be how you made things become things.

It’s probably not how you do it now, you need a much larger coordinated cultural footprint across various short-form streaming formats to make a dent in the embedding space of the LLMs.

Mind you, that’s not the same as making it ‘real’ or even ‘realish’ now, is it?

A bit vogue-ish perhaps, but to prove a point I asked ChatGPT what it knew about Breezepunk.

It took a while, but… it tried to turn it into the altogether less satisfying “windpunk”.

I like making the cursor blink on ChatGPT.

The longer the better. I think it means you’re onto something.

Or maybe that’s just my Bartle-type showing again.

The production design of the recent adaptation of William Gibson’s The Peripheral seemed “fluttery” – particularly in its depiction of the post-jackpot London timeline.

Or perhaps the aesthetic is much more one of ‘filigree‘.

There’s heaviness and lightness being expressed as power by the various factions in their architecture, fashion, gadgets.

It’s an overt expression of that power being wielded via nanotechnology – assemblers, disassemblers constructing and deconstructing huge edifices at will.

From Vincenzo Natali’s concept art for The Peripheral series

Solid melting into air.

Into the breeze.

Punk.

Stochastic Corvids (not parrots!): Far-future Uplifted Crows as commentary on ChatGPT / GPT-n

Over the holidays I’ve been really enjoying “Children of Memory”.

It’s the (last?) book in Adrian Tchaikovsky’s “Children of…” series – an eon-and-galaxy-spanning set of stories where uplifted descendants of earth creatures interact with the remains of humanity on (generally) badly-terraformed worlds.

One thing that struck me like a gong was how perfectly coincident my reading was with the rise of ChatGPT, and the surrounding hype and hot takes. Matt W’s recent post on AI and sentience pushed me over the edge.

I suspect the author of a tremendous feat of ‘skating to where the puck will be’, based on GPT-3 etc.

Without giving too much away, one of the uplifted life forms is a race of corvids – known as the Corvids, who exist as bonded pairs.

They are a kind of organic GAN, or generative adversarial network: constantly dismantling everything around them, learning and bickering their way toward incredibly effective solutions that other species miss – leading the other species in the book to speculate on their sentience in much the same way as many people have around GPT-n over the last year or two – including an advanced AI based on an uploaded human (who runs on a computational substrate made of ants, by the way…).
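(For anyone who hasn’t met the term: a GAN really is a bonded pair – one network generates, the other criticises, and they train against each other. A toy, entirely hypothetical sketch in PyTorch, nothing to do with the book or any real system:)

```python
# Toy GAN, just to make the "bonded pair" analogy concrete: one net generates,
# the other criticises, and they bicker their way towards a good answer.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
critic = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0       # the "real" data: samples around 3
    fake = generator(torch.randn(64, 8))  # the generator's current guesses

    # Critic: learn to tell real from fake.
    c_loss = loss_fn(critic(real), torch.ones(64, 1)) + \
             loss_fn(critic(fake.detach()), torch.zeros(64, 1))
    c_opt.zero_grad()
    c_loss.backward()
    c_opt.step()

    # Generator: learn to fool the critic.
    g_loss = loss_fn(critic(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(256, 8)).mean().item())  # should drift towards ~3.0
```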

Here are a few passages from late in the book where that AI questions them about their apparent sentience:

Strutting around and shaking out their wings. Through all the means available to her, she watches them and tries to work out what it must be like to be them. Do they understand what has happened to them? They say they do, but that’s not necessarily the same thing.

She thinks of problem-solving AI algorithms from back in her day, which could often find remarkably unintuitive but effective solutions to things whilst being dumb as bricks in all other respects. And these were smart birds, nothing like that. She wanted them to drop the act, basically. She wanted them to shrug and eye each other and then admit they were human-like intellects, who’d just been perpetrating this odd scam for their own amusement. And yet the birds mutter to one another in their own jabber, quote poetry that predates whole civilizations, and refuse to let her in.

The two birds stand side by side, stiff as parade ground soldiers. As though they’re about to defend their thesis or give a final report to the board. ‘We understand the principles you refer to,’ Gothi states. ‘It was a matter that much concerned our progenitors on Rourke, after diplomatic relations were established between our two houses both alike in dignity.’ Word salad, as though some Dadaist was plucking ideas at random from a hat and ending up by chance with whole sentences. ‘Sentience,’ adds Gethli. ‘Is what is a what? And, if so, what?’ ‘You think,’ Kern all but accuses them. ‘You’d think we think,’ he either answers or gives back a mangled echo. ‘But we have thought about the subject and come to the considered conclusion that we do not think. And all that passes between us and within us is just mechanical complexity.’ ‘We have read the finest behavioural studies of the age, and do not find sentience within the animal kingdom, save potentially in that species which engineered us,’ Gothi agrees. ‘You’re telling me that you’re not sentient,’ Kern says. ‘You’re quoting references.’ ‘An adequate summation,’ Gethli agrees.

‘The essential fallacy,’ Gothi picks up, ‘is that humans and other biologically evolved, calculating engines feel themselves to be sentient, when sufficient investigation suggests this is not so. And that sentience, as imagined by the self-proclaimed sentient, is an illusion manufactured by a sufficiently complex series of neural interactions. A simulation, if you will.’ ‘On this basis, either everything of sufficient complexity is sentient, whether it feels itself to be or not, or nothing is,’ Gethli tells her. ‘We tend towards the latter. We know we don’t think, so why should anything else?’ ‘And in the grander scheme of things, it’s not important,’ Gothi concludes imperiously.

Children of Memory, Adrian Tchaikovsky

Wonderful stuff. Hugely recommended.

Does anyone know if Mr Tchaikovsky has commented on what approaches a keen-eyed (magpie?) satire of current AI hype in his work?