🐙 Octopii, Very fast, very heavy toddlers made of steel and self-driving tests

Jason points to a great piece on Large Language Models, ChatGPT, etc.

“Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other.

Meanwhile, O, a hyperintelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B’s conversations. O knows nothing about English initially but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A’s utterances.

Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do — with meaning and intent. Then one day A calls out: “I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud.”

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html via Kottke.org

He goes on to talk about his experiences ‘managing’ a semi-self-driving car (I think it might be a Volvo, like I used to own?) where you have to be aware that the thing is an incredibly heavy, very fast toddler made of steel, with Dunning-Kruger-ish marketing promises pasted all over the top of it.

“You can’t ever forget the self-driver is like a 4-year-old kid mimicking the act of driving and isn’t capable of thinking like a human when it needs to. You forget that and you can die.”

That was absolutely my experience of my previous car too.

It was great for long stretches of motorway (freeway) driving in normal conditions, but if it was raining or things got more twisty/rural (which they do in most of the UK quite quickly), you switched it off sharpish.

I’m renting a Tesla (I know, I know) for the first time on my next trip to the States. It was a cheap deal, and it’s an EV, and it’s California, so I figure why not. I don’t think I’ll use Autopilot, though, having used semi-autonomous (level 2? 3?) driving before.

Perhaps there needs to be a ‘self-driving test’ for the humans about to go into partnership with very fast, very heavy semi-autonomous non-human toddlers before they are allowed on the roads with them…

Blog all dog-eared unpages: The Red Men by Matthew De Abaitua

I’d been recommended “The Red Men” by many.

Webb, Timo, Rod, Schulze, Bridle (who originally published it) all mentioned it in conversation monthly, and sometimes weekly as memetic tides of our work rose and fell into harmony with it.

The physical (red) book stared at me from a shelf until recently, when, aptly, it leapt the fence into the digital and was republished as an e-book.

This leap was prompted by the release of Shynola’s excellent short film – “Dr. Easy” – that brings to life the first chapter (or 9mins 41secs) of the book.

The Red Men resonates with everything.

Everything here on this site, everything I’ve written, everything I’ve done. Everything I’m doing.

In fact, “resonates” is the wrong word.

Shakes.

It shook me.

Read it.

My highlights, fwiw (with minimal-to-no spoilers) below:

“I wriggled my hand free of Iona’s grasp and checked my pulse. It was elevated. Her question came back to me: Daddy, why do people get mad? Well, my darling, drugs don’t help. And life can kick rationality out of you. You can be kneecapped right from the very beginning. Even little girls and boys your age are getting mad through bad love. When you are older, life falls short of your expectations, your dreams are picked up by fate, considered, and then dashed upon the rocks, and then you get mad. You just do. Your only salvation is to live for the dreams of others; the dreams of a child like you, my darling girl, my puppy pie, or the dreams of an employer, like Monad.”

“The body of the robot was designed by a subtle, calculating intelligence, with a yielding cover of soft natural materials to comfort us and a large but lightweight frame to acknowledge that it was inhuman. The robot was both parent and stranger: you wanted to lay your head against its chest, you wanted to beat it to death. When I hit my robot counsellor, its blue eyes held a fathomless love for humanity.”

“ugliness was a perk confined to management.”

“Positioning himself downwind of the shower-fresh hair of three young women, Raymond concentrated on matching the pace of this high velocity crowd. There were no beggars, no food vendors, no tourists, no confused old men, no old women pulling trolleys, no madmen berating the pavement, to slow them down; he walked in step with a demographically engineered London, a hand-picked public.”

“Over the next few days you will encounter more concepts and technology like this that you may find disturbing. If at any time you feel disorientated by Monad, please contact your supervisor immediately.’

‘How do you help him?’ ‘It’s about live analysis of opportunities. Anyone can do retrospective analysis. I crunch information at light speed so I’m hyper-responsive to changing global business conditions. Every whim or idea Harold has, I can follow it through. I chase every lead, and then I present back to him the ones which are most likely to bear fruit. I am both his personal assistant and, in some ways, his boss.’

“So long as the weirdness stayed under the aegis of a corporation, people would accept it.”

“Once you pass forty, your faculties recede every single day. New memories struggle to take hold and you are unable to assimilate novelty. Monad is novelty. Monad is the new new thing. Without career drugs, the future will overwhelm us, wave after wave after wave.’”

“No one has access to any code. I doubt we could understand it even if we did. All our IT department can offer is a kind of literary criticism.’

‘I can’t sleep. I stopped taking the lithium a while ago. Is this the mania again? Monad is a corporation teleported in from the future: discuss. Come on! You know, don’t you? You know and you’re not telling. I would have expected more protests. Anti-robot rallies, the machine wars, a resistance fighting for what it means to be human. No one cares, do they? Not even you. You’ll get up in the morning and play this message and it will be last thing you want to hear.’

“George Orwell wrote that after the age of thirty the great mass of human beings abandon individual ambition and live chiefly for others. I am one of that mass.”

“Plenty of comment had been passed on the matter, worrying over the philosophical and ethical issues arising from simulated people, and it was filed along with the comment agitating about global warming, genetically modified food, nano-technology, cloning, xenotransplantation, artificial intelligence, superviruses and rogue nuclear fissile material.”

“His gaze raked to and fro across the view of the city, the unsettled nervous energy of a man whose diary is broken down into units of fifteen minutes.”

“This has been very useful. Send my office an invoice. Before I go, tell me, what is the new new thing?’ I answered immediately. ‘The Apocalypse. The lifting of the veil. The revelation.’ ‘Yes, of course.’ His coat was delivered to him. As he shucked it on, Spence indicated to the waiter that I was to continue to drink at his expense. ‘Still, the question we must all ask ourselves is this: what will we do if the Apocalypse does not show up?’”

“History had been gaining on us all year and that clear sunny morning in New York it finally pounced.”

“‘No. Advanced technology will be sold as magic because it’s too complicated for people to understand and so they must simply have faith in it.”

‘Every generation loses sight of its evolutionary imperative. By the end of the Sixties it was understood that the power of human consciousness must be squared if we were to ensure the survival of mankind. This project did not survive the Oil Crisis. When I first met you, you spoke of enlightenment. That project did not survive 9/11. With each of these failures, man sinks further into the quagmire of cynicism. My question is: do you still have any positive energy left in you?’

“‘My wife is pregnant,’ I replied. ‘My hope grows every day. It kicks and turns and hiccups.’ Spence did not like my reply. Stoker Snr took over the questioning. ‘We are not ready to hand the future over to someone else. Our window of opportunity is still open.’ He took out what looked like an inhaler for an asthmatic and took a blast of the drug. Something to freshen up his implants.”

‘Do you remember how you said to me that the Apocalypse was coming? The revelation. The great disclosure. You wanted change. It looked like it was going to be brands forever, media forever, house prices forever, a despoticism of mediocrity and well-fed banality. Well, Dr Easy is going to cure us all of that.’

‘We did some research on attitudes to Monad. We had replies like “insane”, “terrifying” and “impossible”. As one man said, “It all seems too fast and complex to get your head around. I’ve stopped reading the newspapers because they make every day feel like the end of the world.”’

‘What disturbs me is how representative that young man’s attitude is. Government exemplifies it. It has learnt the value of histrionics. It encourages the panic nation because a panicking man cannot think clearly. But we can’t just throw our hands up in the air and say, “Well, I can no longer make sense of this.” The age is not out of control. If you must be apocalyptic about it, then tell yourself that we are living after the end of the world.’

“The crenelations of its tower were visible from much of the town, a comforting symbol of the town’s parish past. Accurately capturing the circuit flowing between landscape and mind was crucial to the simulation.”

“He handed me a ceremonial wafer smeared with the spice. ‘We start by entering Leto’s communal dreamland.’ I looked with horror at the wafer. ‘This is ridiculous. I am not eating this.’ I handed the wafer back to him. He refused it. ‘I’m giving you a direct order. Take the drug!’ ‘This is not the military, Bruno. We work in technology and marketing.’ ‘We work in the future!’ screamed Bougas. ‘And this is how the future gets decided.’

“One of Monad’s biggest problems was its monopoly. To survive in the face of a suspicious government, the company went out of its way to pretend it had the problems and concerns of any other corporations, devising products and brands to fit in with capitalism.”

“Management wanted to talk so they dispatched a screen to wake me; it slithered under the bedroom door then glided on a cushion of air across the floor until it reached the wall where it stretched out into a large landscape format.”

“I understand why you work there. Why you collaborate with them. You have a family, you are suspended in a system that you didn’t create. But the excuse of good intentions is exhausted.”

‘You are afraid. There is a lot of fear around. Society is getting older. The old are more susceptible to fear. Fearful of losing all they have amassed and too old to hope for a better future. You’re still young. Don’t let the fear get inside you.’

‘The battle has been lost and all the good people have gone crazy. My surveys reveal a people pushed down just below the surface of what it means to be human. You exist down where the engines are. Damned to turn endlessly on the cycle of fear and desire. Should I push the fear button? Or should I pull the desire lever? Save me some time. Tell me which one works best on you.’

“Society had become a sick joke, a sleight-of-hand in which life was replaced with a cheap replica. Progress abandoned, novelty unleashed, spoils hoarded by the few. The temperature soared as the body politic fought a virus from the future.

“Dr Hard grabbed me by the hair and shook some sense into me. ‘Artificial intelligences are not programmed, Nelson. They are bred. My ancestor was an algorithm in a gene pool of other algorithms. It produced the best results and so passed on its sequence to the next generation. This evolution continued at light speed with innumerable intelligences being tested and discarded until a code was refined that was good enough. A billion murders went into my creation. Your mistake is to attribute individual motivation to me. I contain multitudes, and I don’t trust any of them.’

And, from the author’s afterword:

The novel was conceived as a hybrid of the modes of literary fiction with the ideas and plotting of science fiction. I wanted to use the characters and setting we associate with literary fiction to make the interpolation of futuristic technology more amusingly dissonant, as that was the character of the times as I experienced them.

Blog all dog-eared unpages: Guilty Robots, Happy Dogs: The Question of Alien Minds by David McFarland

If you know me, then you’ll know that “Guilty Robots, Happy Dogs” pretty much had me at the title.

It’s obviously very relevant to our interests at BERG, and I’ve been trying to read up around the area of AI, robotics and companion species for a while.

Struggled to get through it, to be honest – I find philosophy a grind to read. My eyes slip off the words and I have to read everything twice to understand it.

But, it was worth it.

My highlights from Kindle below, and my emboldening on bits that really struck home for me. Here’s a review by Daniel Dennett for luck.

“Real aliens have always been with us. They were here before us, and have been here ever since. We call these aliens animals.”

“They will carry out tasks, such as underwater survey, that are dangerous for people, and they will do so in a competent, efficient, and reassuring manner. To some extent, some such tasks have traditionally been performed by animals. We place our trust in horses, dogs, cats, and homing pigeons to perform tasks that would be difficult for us to perform as well if at all.

“Autonomy implies freedom from outside control. There are three main types of freedom relevant to robots. One is freedom from outside control of energy supply. Most current robots run on batteries that must be replaced or recharged by people. Self-refuelling robots would have energy autonomy. Another is freedom of choice of activity. An automaton lacks such freedom, because either it follows a strict routine or it is entirely reactive. A robot that has alternative possible activities, and the freedom to decide which to do, has motivational autonomy. Thirdly, there is freedom of ‘thought’. A robot that has the freedom to think up better ways of doing things may be said to have mental autonomy.”
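McFarland’s three flavours of autonomy read almost like a checklist, so here’s a toy Python sketch (my own, not from the book) of that taxonomy – the flags and the Roomba-ish example are purely illustrative:

```python
# Illustrative sketch of McFarland's three types of robot autonomy:
# energy (self-refuelling), motivational (chooses its activity),
# and mental (can think up better ways of doing things).
from dataclasses import dataclass


@dataclass
class Robot:
    self_refuelling: bool    # energy autonomy
    chooses_activity: bool   # motivational autonomy
    improves_methods: bool   # mental autonomy

    def autonomy(self):
        kinds = []
        if self.self_refuelling:
            kinds.append("energy")
        if self.chooses_activity:
            kinds.append("motivational")
        if self.improves_methods:
            kinds.append("mental")
        return kinds


# A Roomba-like robot: docks itself to recharge, but follows a fixed routine.
print(Robot(True, False, False).autonomy())  # → ['energy']
```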

“One could envisage a system incorporating the elements of a mobile robot and an energy conversion unit. They could be combined in a single robot or kept separate so that the robot brings its food back to the ‘digester’. Such a robot would exhibit central place foraging.”

“turkeys and aphids have increased their fitness by genetically adapting to the symbiotic pressures of another species.

“In reality, I know nothing (for sure) about the dog’s inner workings, but I am, nevertheless, interpreting the dog’s behaviour.”

“A thermostat … is one of the simplest, most rudimentary, least interesting systems that should be included in the class of believers—the class of intentional systems, to use my term. Why? Because it has a rudimentary goal or desire (which is set, dictatorially, by the thermostat’s owner, of course), which it acts on appropriately whenever it believes (thanks to a sensor of one sort or another) that its desires are unfulfilled. Of course, you don’t have to describe a thermostat in these terms. You can describe it in mechanical terms, or even molecular terms. But what is theoretically interesting is that if you want to describe the set of all thermostats (cf. the set of all purchasers) you have to rise to this intentional level.”

“So, as a rule of thumb, for an animal or robot to have a mind it must have intentionality (including rationality) and subjectivity. Not all philosophers will agree with this rule of thumb, but we must start somewhere.”

We want to know about robot minds, because robots are becoming increasingly important in our lives, and we want to know how to manage them. As robots become more sophisticated, should we aim to control them or trust them? Should we regard them as extensions of our own bodies, extending our control over the environment, or as responsible beings in their own right? Our future policies towards robots and animals will depend largely upon our attitude towards their mentality.”

“In another study, juvenile crows were raised in captivity, and never allowed to observe an adult crow. Two of them, a male and a female, were housed together and were given regular demonstrations by their human foster parents of how to use twig tools to obtain food. Another two were housed individually, and never witnessed tool use. All four crows developed the ability to use twig tools. One crow, called Betty, was of special interest.”

“What we saw in this case that was the really surprising stuff, was an animal facing a new task and new material and concocting a new tool that was appropriate in a process that could not have been imitated immediately from someone else.”

A video clip of Betty making a hook can be seen on the Internet.

“We are looking for a reason to suppose that there is something that it is like to be that animal. This does not mean something that it is like to us. It does not make sense to ask what it would be like (to a human) to be a bat, because a human has a human brain. No film-maker, or virtual reality expert, could convey to us what it is like to be a bat, no matter how much they knew about bats.”

We have seen that animals and robots can, on occasion, produce behaviour that makes us sit up and wonder whether these aliens really do have minds, maybe like ours, maybe different from ours. These phenomena, especially those involving apparent intentionality and subjectivity, require explanation at a scientific level, and at a philosophical level. The question is, what kind of explanation are we looking for? At this point, you (the reader) need to decide where you stand on certain issues”

“The successful real (as opposed to simulated) robots have been reactive and situated (see Chapter 1) while their predecessors were ‘all thought and no action’. In the words of philosopher Andy Clark”

“Innovation is desirable but should be undertaken with care. The extra research and development required could endanger the long-term success of the robot (see also Chapters 1 and 2). So in considering the life-history strategy of a robot it is important to consider the type of market that it is aimed at, and where it is to be placed in the spectrum. If the robot is to compete with other toys, it needs to be cheap and cheerful. If it is to compete with humans for certain types of work, it needs to be robust and competent.”

“connectionist networks are better suited to dealing with knowledge how, rather than knowledge that”

“The chickens have the same colour illusion as we do.”

For robots, it is different. Their mode of development and reproduction is different from that of most animals. Robots have a symbiotic relationship with people, analogous to the relationship between aphids and ants, or domestic animals and people. Robots depend on humans for their reproductive success. The designer of a robot will flourish if the robot is successful in the marketplace. The employer of a robot will flourish if the robot does the job better than the available alternatives. Therefore, if a robot is to have a mind, it must be one that is suited to the robot’s environment and way of life, its ecological niche.”

“there is an element of chauvinism in the evolutionary continuity approach. Too much attention is paid to the similarities between certain animals and humans, and not enough to the fit between the animal and its ecological niche. If an animal has a mind, it has evolved to do a job that is different from the job that it does in humans.

“When I first became interested in robotics I visited, and worked in, various laboratories around the world. I was extremely impressed with the technical expertise, but not with the philosophy. They could make robots all right, but they did not seem to know what they wanted their robots to do. The main aim seemed to be to produce a robot that was intelligent. But an intelligent agent must be intelligent about something. There is no such thing as a generalised animal, and there will be no such thing as a successful generalised robot.

Although logically we cannot tell whether it can feel pain (etc.), any more than we can with other people, sociologically it is in our interest (i.e. a matter of social convention) for the robot to feel accountable, as well as to be accountable. That way we can think of it as one of us, and that also goes for the dog.”

System Persona

Ben Bashford’s writing about ‘Emoticomp’ – the practicalities of working as a designer of objects and systems that have behaviour and perhaps ‘intelligence’ built into them.

It touches on stuff I’ve talked/written about here and over on the BERG blog – but moves out of speculation and theory to the foothills of the future: being a jobbing designer working on this stuff, and how one might attack such problems.

Excellent.

I really think we should be working on developing new tools for doing this. One idea I’ve had is system/object personas. Interaction designers are used to using personas (research based user archetypes) to describe the types of people that will use the thing they’re designing – their background, their needs and the like but I’m not sure if we’ve ever really explored the use of personas or character documentation to describe the product themselves. What does the object want? How does it feel about it? If it can sense its location and conditions how could that affect its behaviour? This kind of thing could be incredibly powerful and would allow us to develop principles for creating the finer details of the object’s behaviour.

I’ve used a system persona before while designing a website for young photographers. The way we developed it was through focus groups with potential users to establish the personality traits of people they felt closest to, trusted and would turn to for guidance. This research helped us establish the facets of a personality statement that influenced the tone of the copy at certain points along the user journeys and helped the messaging form a coherent whole. It was useful at the time but I genuinely believe this approach can be adapted and extended further.

I think you could develop a persona for every touchpoint of the connected object’s service. Maybe it could be the same persona if the thing is to feel strong and omnipresent but maybe you could use different personas for each touchpoint if you’re trying to bring out the connectedness of everything at a slightly more human level. This all sounds a bit like strategy or planning doesn’t it? A bit like brand principles. We probably need to talk to those guys a bit more too.
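At its simplest, Bashford’s object-persona idea could be a data structure that voices the same event differently per persona. A hypothetical sketch – the personas, traits and lines below are all invented for illustration:

```python
# Hypothetical sketch of a "system persona" shaping an object's messaging.
# All names, traits and copy lines here are invented examples.
from dataclasses import dataclass


@dataclass
class Persona:
    name: str
    traits: list                # e.g. ["grumpy", "helpful despite itself"]
    low_battery_line: str       # persona-specific voice for one event


grumpy = Persona(
    name="World-weary familiar",
    traits=["grumpy", "helpful despite itself"],
    low_battery_line="Running on fumes here. A charge would be nice.",
)

perky = Persona(
    name="Eager assistant",
    traits=["cheerful"],
    low_battery_line="Oops! Battery's low — plug me in, please!",
)


def status_message(persona, battery_pct):
    # The same sensed condition, voiced differently per persona.
    if battery_pct < 20:
        return persona.low_battery_line
    return "All fine."


print(status_message(grumpy, 15))
```

The point being that the persona document, not the engineer’s default string, decides how the object speaks at each touchpoint.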

Planet of Clippy

Over at Interactive Architecture Dot Org, a report of Stephen Gage and Will Thorne’s “Edge Monkeys”:

The UCL EdgeMonkey robot, picture from interactivearchitecture.org

Their function would be to patrol building facades, regulating energy usage and indoor conditions. Basic duties include closing unattended windows, checking thermostats, and adjusting blinds. But the machines would also “gesture meaningfully to internal occupants” when building users “are clearly wasting energy.” They are described as “intrinsically delightful and funny.”
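Those duties read like a simple rule loop. A toy sketch (mine, not from Gage and Thorne’s work – the state fields and thresholds are invented) of what an Edge Monkey’s patrol might compute:

```python
# Toy sketch of an Edge Monkey patrol: close unattended windows, nudge
# drifting thermostats, and "gesture meaningfully" at occupants wasting
# energy. State fields and thresholds are invented for illustration.
def patrol(facade_state, target_temp=20):
    actions = []
    for room, s in facade_state.items():
        if s["window_open"] and not s["occupied"]:
            actions.append((room, "close window"))
        if abs(s["thermostat"] - target_temp) > 2:
            actions.append((room, "adjust thermostat"))
        if s["occupied"] and s["lights_on"] and s["daylight"]:
            actions.append((room, "gesture meaningfully"))
    return actions


state = {
    "office": {"window_open": True, "occupied": False,
               "thermostat": 25, "lights_on": True, "daylight": True},
}
print(patrol(state))  # → [('office', 'close window'), ('office', 'adjust thermostat')]
```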

I applaud the idea, and (for now) look forward to a world chock full of daemons and familiars helping us do the ecological-right-thing… but I think trying to make them “delightful and funny” would be a mistake.

Far better to make them slightly grumpy and world-weary – rather than have an insufferably jolly robot ask if you really want to leave that light on.

Who needs a planet of Clippy?

Man versus robots and cities

Very much enjoyed the video for the new Chemical Brothers track "Believe", which portrays a man in the throes of some kind of mental breakdown, tormented by industrial robots – and the city.

Chems1

The robots’ lolloping indefatigability gives them an air of menace that is a mix of the raptor, the T-1000 from T2, and the rage-victims of 28 Days Later.

Chems2

The use of depth-of-field in the shots and the colour treatment of the video give it a claustrophobia and feeling of decay that I, at least, associate with the best, most terrifying British sci-fi of the 70s and 80s. Quatermass, Triffids, Pertwee in quarries, etc.

UNIT unfortunately doesn’t come to the rescue in this one.

Chems3

The denouement, after a terrific chase sequence, sees our antihero’s final downfall come not at the claw of the robots, but by the city – as reality (and a 70s op-art concrete car park) falls apart in psychedelic shards.

The best mini-movie I’ve seen in a while.

More on the directors, Dom & Nic, the process and details of the CGI here.

Roomba Hair Weave

It’s common for people to say that couples who aren’t ready for a baby get themselves a dog. We of course have regressed one step further, realising we are not ready for a dog – and got ourselves a Roomba.

The roboticist Dr Rodney Brooks of MIT, speaking at Wired’s recent Nextfest, said that the benefits of personal or domestic robotics in the next 5 years or so would be limited to the useful side effects one could derive from their level of intelligence and autonomy – which I think he said was about the same as a 6-month-old puppy’s.

This led me to daydream about the global market for 6-month-old bioengineered puppy/vacuum-cleaner hybrids – but I digress.

The parallels to owning a pet continued tonight as I sat down to delouse my Roomba. With a robot comes great responsibility, after all.

Roomba’s a whizz at chasing dustbunnies, but one of the things that’s slowed the little tyke down in the recent months is the hair gathering around his undercarriage. I sat down at the kitchen table and began to pluck at his furballs.

defluffing_roomba

By the time I had finished I had enough to make a hair-weave that any self-respecting local TV anchorman would jump at.

roomba_weave

Quite a business perhaps in these ‘useful side effects’.

Welcome to our robotic future.