Blog all dog-eared unpages: Guilty Robots, Happy Dogs: The Question of Alien Minds by David McFarland

If you know me, then you’ll know that “Guilty Robots, Happy Dogs” pretty much had me at the title.

It’s obviously very relevant to our interests at BERG, and I’ve been trying to read up around the area of AI, robotics and companion species for a while.

Struggled to get through it to be honest – I find philosophy a grind to read. My eyes slip off the words and I have to read everything twice to understand it.

But, it was worth it.

My highlights from Kindle below, and my emboldening on bits that really struck home for me. Here’s a review by Daniel Dennett for luck.

“Real aliens have always been with us. They were here before us, and have been here ever since. We call these aliens animals.”

“They will carry out tasks, such as underwater survey, that are dangerous for people, and they will do so in a competent, efficient, and reassuring manner. To some extent, some such tasks have traditionally been performed by animals. We place our trust in horses, dogs, cats, and homing pigeons to perform tasks that would be difficult for us to perform as well if at all.”

“Autonomy implies freedom from outside control. There are three main types of freedom relevant to robots. One is freedom from outside control of energy supply. Most current robots run on batteries that must be replaced or recharged by people. Self-refuelling robots would have energy autonomy. Another is freedom of choice of activity. An automaton lacks such freedom, because either it follows a strict routine or it is entirely reactive. A robot that has alternative possible activities, and the freedom to decide which to do, has motivational autonomy. Thirdly, there is freedom of ‘thought’. A robot that has the freedom to think up better ways of doing things may be said to have mental autonomy.”

“One could envisage a system incorporating the elements of a mobile robot and an energy conversion unit. They could be combined in a single robot or kept separate so that the robot brings its food back to the ‘digester’. Such a robot would exhibit central place foraging.”

“turkeys and aphids have increased their fitness by genetically adapting to the symbiotic pressures of another species.”

“In reality, I know nothing (for sure) about the dog’s inner workings, but I am, nevertheless, interpreting the dog’s behaviour.”

“A thermostat … is one of the simplest, most rudimentary, least interesting systems that should be included in the class of believers—the class of intentional systems, to use my term. Why? Because it has a rudimentary goal or desire (which is set, dictatorially, by the thermostat’s owner, of course), which it acts on appropriately whenever it believes (thanks to a sensor of one sort or another) that its desires are unfulfilled. Of course, you don’t have to describe a thermostat in these terms. You can describe it in mechanical terms, or even molecular terms. But what is theoretically interesting is that if you want to describe the set of all thermostats (cf. the set of all purchasers) you have to rise to this intentional level.”
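Dennett’s thermostat is simple enough to write down. Here’s a toy sketch (mine, not the book’s) of a thermostat described at the intentional level – a rudimentary desire, a sensor-supplied belief, and an action taken whenever the belief says the desire is unfulfilled:

```python
# Toy illustration of the intentional stance applied to a thermostat.
# The class and method names are invented for this sketch.

class Thermostat:
    def __init__(self, desired_temp):
        self.desire = desired_temp  # set "dictatorially" by the owner

    def believes_desire_unfulfilled(self, sensed_temp):
        # Its "belief" about the world, courtesy of a sensor
        return sensed_temp < self.desire

    def act(self, sensed_temp):
        # Acts appropriately whenever it believes its desire is unfulfilled
        if self.believes_desire_unfulfilled(sensed_temp):
            return "heat on"
        return "heat off"

t = Thermostat(desired_temp=20.0)
print(t.act(17.5))  # heat on
print(t.act(21.0))  # heat off
```

You could, of course, describe the same loop in purely mechanical terms – the point of the quote is that the intentional vocabulary is what generalises across the set of all thermostats.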

“So, as a rule of thumb, for an animal or robot to have a mind it must have intentionality (including rationality) and subjectivity. Not all philosophers will agree with this rule of thumb, but we must start somewhere.”

“We want to know about robot minds, because robots are becoming increasingly important in our lives, and we want to know how to manage them. As robots become more sophisticated, should we aim to control them or trust them? Should we regard them as extensions of our own bodies, extending our control over the environment, or as responsible beings in their own right? Our future policies towards robots and animals will depend largely upon our attitude towards their mentality.”

“In another study, juvenile crows were raised in captivity, and never allowed to observe an adult crow. Two of them, a male and a female, were housed together and were given regular demonstrations by their human foster parents of how to use twig tools to obtain food. Another two were housed individually, and never witnessed tool use. All four crows developed the ability to use twig tools. One crow, called Betty, was of special interest.”

“What we saw in this case that was the really surprising stuff, was an animal facing a new task and new material and concocting a new tool that was appropriate in a process that could not have been imitated immediately from someone else.”

A video clip of Betty making a hook can be seen on the Internet.

“We are looking for a reason to suppose that there is something that it is like to be that animal. This does not mean something that it is like to us. It does not make sense to ask what it would be like (to a human) to be a bat, because a human has a human brain. No film-maker, or virtual reality expert, could convey to us what it is like to be a bat, no matter how much they knew about bats.”

“We have seen that animals and robots can, on occasion, produce behaviour that makes us sit up and wonder whether these aliens really do have minds, maybe like ours, maybe different from ours. These phenomena, especially those involving apparent intentionality and subjectivity, require explanation at a scientific level, and at a philosophical level. The question is, what kind of explanation are we looking for? At this point, you (the reader) need to decide where you stand on certain issues”

“The successful real (as opposed to simulated) robots have been reactive and situated (see Chapter 1) while their predecessors were ‘all thought and no action’. In the words of philosopher Andy Clark”

“Innovation is desirable but should be undertaken with care. The extra research and development required could endanger the long-term success of the robot (see also Chapters 1 and 2). So in considering the life-history strategy of a robot it is important to consider the type of market that it is aimed at, and where it is to be placed in the spectrum. If the robot is to compete with other toys, it needs to be cheap and cheerful. If it is to compete with humans for certain types of work, it needs to be robust and competent.”

“connectionist networks are better suited to dealing with knowledge how, rather than knowledge that”

“The chickens have the same colour illusion as we do.”

“For robots, it is different. Their mode of development and reproduction is different from that of most animals. Robots have a symbiotic relationship with people, analogous to the relationship between aphids and ants, or domestic animals and people. Robots depend on humans for their reproductive success. The designer of a robot will flourish if the robot is successful in the marketplace. The employer of a robot will flourish if the robot does the job better than the available alternatives. Therefore, if a robot is to have a mind, it must be one that is suited to the robot’s environment and way of life, its ecological niche.”

“there is an element of chauvinism in the evolutionary continuity approach. Too much attention is paid to the similarities between certain animals and humans, and not enough to the fit between the animal and its ecological niche. If an animal has a mind, it has evolved to do a job that is different from the job that it does in humans.”

“When I first became interested in robotics I visited, and worked in, various laboratories around the world. I was extremely impressed with the technical expertise, but not with the philosophy. They could make robots all right, but they did not seem to know what they wanted their robots to do. The main aim seemed to be to produce a robot that was intelligent. But an intelligent agent must be intelligent about something. There is no such thing as a generalised animal, and there will be no such thing as a successful generalised robot.”

“Although logically we cannot tell whether it can feel pain (etc.), any more than we can with other people, sociologically it is in our interest (i.e. a matter of social convention) for the robot to feel accountable, as well as to be accountable. That way we can think of it as one of us, and that also goes for the dog.”

Exporting the past into the future, or, “The Possibility Jelly lives on the hypersurface of the present”

Warning – this is a collection of half-formed thoughts, perhaps even more than usual.

I’d been wanting, for a while, to write something about Google Latitude and the other location-sharing services that we (Dopplr) often get lumped in with. First of all, there was the PSFK Good Ideas Salon, where I was thinking about it (not very articulately), then shortly after that Google Latitude was announced, in a flurry of tweets.

At the time I myself blurted:

[image: tweet]

My attitude to most Location-Based Services (or LBS in the ancient three-letter-acronymicon of the Mobile Industry) has been hardened by sitting through umpty-nine presentations by the white-men-in-chinos who maintain a fortune can be made by the first company to reliably send a passer-by a voucher for a cheap coffee as they drift past *bucks.

It’s also been greatly informed by working and talking with my esteemed erstwhile colleague Christopher Heathcote who gave a great presentation at Etech (5 years ago!!! Argh!) called “35 ways to find your location“, and has both at Orange and Nokia been in many of the same be-chino’d presentations.

[image: home/work routine map]
Often, he’s pointed out, quite rightly, that location is a matter of routine. We’re at work, at college, at home, at our corner shop, at our favourite pub. These patterns are worn into our personal maps of the city, and usually it’s the exceptions to them that we record or share – a special excursion, or perhaps an unexpected diversion, pleasant or otherwise, that we want to broadcast for companionship or assistance.

Also, most of the time, if I broadcast my location to trusted parties such as my friends, they may have limited opportunity to take advantage of that information – they, after all, are probably absorbed in their own routines, and by the time we rendezvous, it would be too late.

Location-based services that have worked with this have had limited success – Dodgeball was perhaps situated software after all, thriving in a walkable bar-hopping subculture like that of Manhattan or Brooklyn, but probably not going to meet with the same results worldwide.

This attitude carried through to late 2006/early 2007 and the initial thinking for Dopplr – that by focussing on (a) nothing more granular than cities-as-place and days-as-time and (b) broadcasting future intention, we could find a valuable location-based service for a certain audience – surfacing coincidence for frequent travellers.

Point (a): taking cities and days as the grain of your service was, we thought, the sweet spot. Once that ‘bit’ of information about the coincidence has been highlighted and injected into whichever networks you’re using, you can use those networks or other established communications methods to act on it: Facebook, Twitter, email, SMS or even voice…

“Cities-and-days” also gave a fuzziness that allowed for flexibility and, perhaps, plausible deniability – ameliorating some of the awkwardness that social networks can unintentionally create (we bent over backwards to try and avoid that in our design decisions, with perhaps partial success).
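The cities-and-days grain is coarse enough to sketch in a few lines. A minimal illustration (invented here – not Dopplr’s actual code) of surfacing coincidences from two travellers’ shared future trips:

```python
# Sketch of "cities-as-place, days-as-time": a coincidence is simply a
# (city, day) pair shared by two travellers' declared trips.

from datetime import date, timedelta

def days(trip):
    """Expand a trip (city, start, end) into (city, day) pairs, inclusive."""
    city, start, end = trip
    d = start
    while d <= end:
        yield (city, d)
        d += timedelta(days=1)

def coincidences(trips_a, trips_b):
    a = {cd for trip in trips_a for cd in days(trip)}
    b = {cd for trip in trips_b for cd in days(trip)}
    return sorted(a & b)

alice = [("London", date(2009, 3, 2), date(2009, 3, 5))]
bob = [("London", date(2009, 3, 4), date(2009, 3, 7)),
       ("Helsinki", date(2009, 3, 10), date(2009, 3, 12))]

print(coincidences(alice, bob))  # both in London on the 4th and 5th of March
```

With the coincidence surfaced at this grain, the follow-up – exact times and places – can happen over whichever channel the travellers already use.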

In the latest issue of Wired, there’s a great example of the awkward situations broadcasting your current exact location could create:

“I explained that I wasn’t actually begging for company; I was just telling people where I was. But it’s an understandable misperception. This is new territory, and there’s no established etiquette or protocol.

This issue came up again while having dinner with a friend at Greens (37.806679 °N, 122.432131 °W), an upscale vegetarian restaurant. Of course, I thought nothing of broadcasting my location. But moments after we were seated, two other friends—Randy and Cameron—showed up, obviously expecting to join us. Randy squatted at the end of the table. Cameron stood. After a while, it became apparent that no more chairs would be coming, so they left awkwardly. I felt bad, but I hadn’t really invited them. Or had I?”

It also seemed like a layer in a stack of software enhancing the social use and construction of place and space – one which we hoped would ‘hand over’ to other, more appropriate tools and agents at other scales of the stack. This hope was reinforced when we saw a few people taking to prefacing tweets broadcasting where they were about to go in the city with ‘microdopplr’. We were also pleased to see the birth of more granular intention-broadcasting services such as Mixin and Zipiko, also from Finland.

This is also a reason that we were keen to connect with FireEagle (aside from the fact that Tom Coates is a good friend of both myself and Matt B.), in that it has the potential to act as a broker between elements in the stack, and in fact help weave the stack in the first place. At the moment, it’s a bit like being a hi-fi nerd connecting high-specification separates with expensive cabling (for instance, this example…), but hopefully an open and simple way to control the sharing of your whereabouts for useful purposes will emerge from the FE ecosystem or something similar.

Point (b), though, still has me thinking that sharing your precise whereabouts – where you are right now – has limited value.

[image: lightcone slide]
This is a slide I’ve used a lot when giving presentations about Dopplr (for instance, this one last year at IxDA).

It’s a representation of an observer moving through space and time, with the future represented by the ‘lightcone’ at the top, and the past by the one at the bottom.

I’ve generally used it to emphasise that Dopplr is about two things – primarily optimising the future via the coincidences surfaced by people sharing their intended future location with people they trust, and secondly, increasingly – allowing you to reflect on your past travels with visualisations, tips, statistics and other tools, for instance the Personal Annual Reports we generated for everyone.

It also points out that the broadcasting of intention is something that necessarily involves human input – it can’t be automated (yet) – more on which later.

By concentrating on the future lightcone, sharing one’s intentions and surfacing the potential coincidences, you have enough information to make the most of them – perhaps changing plans slightly in order to maximise your overlap with a friend or colleague. It’s about wiggling that top lightcone around based on information you wouldn’t normally have in order to make the most of your time – at the grain of spacetime Dopplr operates at.

Google Latitude, Brightkite and, to an extent, FireEagle have made me think a lot about the grain of spacetime in such services, and how best to work with it in different contexts. Also, I’ve been thinking about cities a lot, in preparation for my talk at Webstock this week – and inspired by Adam‘s new book, Dan’s ongoing mission to informationally refactor the city and the street, Anne Galloway and Rob Shields’ excellent “Space and Culture” blog and the work of many others, including neogeographers-par-excellence Stamen.

I’m still convinced that hereish-and-soonish/thereish-and-thenish are the grain we need to be exploring rather than just connecting a network of the pulsing ‘blue-dot’.

Tom Taylor gave voice to this recently:

“The problem with these geolocative services is that they assume you’re a precise, rational human, behaving as economists expect. No latitude for the unexpected; they’re determined to replace every unnecessary human interaction with the helpful guide in your pocket.

Red dot fever enforces a precision into your design that the rest must meet to feel coherent. There’s no room for the hereish, nowish, thenish and soonish. The ‘good enough’.

I’m vaguely tempted to shutdown iamnear, to be reborn as iamnearish. The Blue Posts is north of you, about five minutes walk away. Have a wander around, or ask someone. You’ll find it.”

My antipathy to the here/now fixation in LBS led me to remix the lightcone diagram and post it to Flickr, ahead of writing this ramble.

The results of doing so delighted and surprised me.

Making the most of hereish and nowish

In retrospect, it wasn’t the most nuanced representation of what I was trying to convey – but it got some great responses.

There was a lot of discussion around whether the cones themselves were the right way to visualise spacetime/informational futures-and-pasts, including my favourite from the ever-awesome Ben Cerveny:

“I think I’d render the past as a set of stalactites dripping off the entire hypersurface, recording the people and objects with state history leaving traces into the viewers knowledgestream, information getting progressively less rich as it is dropped from the ‘buffers of near-now”

Read the entire thread at Flickr – it gets crazier.

But, interwoven in the discussion of the Possibility Jellyfish, came comments about the relative value of place-based information over time.

Chris Heathcote pointed out that sometimes that pulsing blue dot is exactly what’s needed to collapse all the ifs-and-buts-and-wheres-and-whens of planning to meet up in the city.

Blaine pointed out that

“we haven’t had enough experience with the instantaneous forms of social communication to know if/how they’re useful.”

but also (I think?) supported my view about the grain of spacetime that feels valuable:

“Precise location data is past its best-by date about 5-10 minutes after publishing for moving subjects. City level location data is valuable until about two hours before you need to start the “exit city” procedures.”

Tom Coates, similarly:

“Using the now to plan for ten minutes / half an hour / a day in the future is useful, as is plotting and reflecting on where you’ve been a few moments ago. But on the other hand, being alerted when someone directly passes your house, or using geography to *trigger* things immediately around you (like for example actions in a gaming environment, or tool-tips in an augmented reality tool, or home automation stuff) requires that immediacy.”
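Blaine’s and Tom’s rules of thumb amount to a shelf life that depends on the grain of the location data. A rough sketch, using Blaine’s ballpark figures (the ‘neighbourhood’ grain and its number are my invention, purely for illustration):

```python
# Grain-dependent shelf life for shared location data: how long a
# published location stays useful. Numbers are ballparks, not measurements.

SHELF_LIFE_MINUTES = {
    "precise": 10,        # lat/long of a moving person: stale in ~5-10 minutes
    "neighbourhood": 120, # invented intermediate grain, for illustration
    "city": 2 * 24 * 60,  # city-level: holds for days, until you leave town
}

def still_useful(grain, minutes_since_published):
    return minutes_since_published <= SHELF_LIFE_MINUTES[grain]

print(still_useful("precise", 30))   # False: the blue dot has moved on
print(still_useful("city", 60 * 24)) # True: "in Wellington this week" still holds
```

The design question, then, is matching the grain you broadcast to the shelf life your audience actually needs.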

He also pointed out my prejudice towards human-to-human sharing in this scenario:

“Essentially then, humans often don’t need to know where you are immediately, but hardware / software might benefit from it — if only because they don’t find the incoming pings distracting and can therefore give it their full and undivided attention..”

Some great little current examples of software acting on exact real-time location (other than the rather banal and mainstream satnav car navigation) are Locale for Android – a little app that changes the settings of your phone based on your location – or iNap, which attempts to wake you up at your rail or tube stop if you’ve fallen asleep on the commute home.
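The iNap case is a nice example of software that genuinely needs the exact, immediate blue dot. A hypothetical sketch of that kind of trigger (not the actual app’s code – the stop coordinates below are approximate):

```python
# Fire an alert when the phone's real-time position comes within a radius
# of the home stop: the class of use that needs the exact blue dot.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def should_wake(position, home_stop, radius_km=0.5):
    return haversine_km(*position, *home_stop) <= radius_km

home_stop = (51.4613, -0.1156)  # Brixton station, roughly
print(should_wake((51.4620, -0.1160), home_stop))  # True: wake up!
print(should_wake((51.5074, -0.1278), home_stop))  # False: still in town
```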

But to return to Mr. Coates.

Tom’s been thinking and building in this area for a long time – from UpMyStreet Conversations to FireEagle, and his talk at KiwiFoo on building products from the affordances of real-time data really made me think hard about here-and-now vs hereish-and-nowish.

Tom at Kiwifoo

Tom presented some of the thinking behind FireEagle, specifically about the nature of dealing with real-time data in products and services.

In the discussion, a few themes appeared for me – one was the relative value of different types of data waxing and waning over time, and the idea that examining these patterns can give rise to product and service ideas.

Secondly, it occurred to me that we often find value in the second-order combination of real-time data, especially when visualised.

I need to think more about this, certainly, but for example, a service such as Paul Mison’s “Above London” astronomical event alerts would become much more valuable if combined with live weather data for where I am.

Thirdly, bumping the visualisation up or down a scale. In the discussion at KiwiFoo I cited Citysense as an example of this – which Adam Greenfield turned me onto – where the aggregate real-time location of individuals within the city gives a live heatmap of which areas are hot-or-not, at least in the eyes of those who participate in the service.
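The Citysense-style move – bumping individual dots up a scale into a picture of the whole city – is essentially binning. A toy sketch:

```python
# Aggregate many individual real-time positions into a coarse grid,
# turning private blue dots into a public heatmap of where the city is busy.

from collections import Counter

def heatmap(positions, cell=0.01):  # ~1 km cells at London latitudes
    counts = Counter()
    for lat, lon in positions:
        key = (round(lat / cell) * cell, round(lon / cell) * cell)
        counts[key] += 1
    return counts

pings = [(51.5101, -0.1340), (51.5103, -0.1338), (51.5099, -0.1343),
         (51.5203, -0.0998)]
hot = heatmap(pings).most_common(1)[0]
print(hot)  # the busiest cell, with 3 pings
```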

From the recent project I worked on at The Royal College of Art, Hiromi Ozaki’s Tribal Search Engine also plays in this area – but almost from the opposite perspective: creating a swarming simulation based on parameters you and your friends control to suggest a location to meet.

I really want to spend more time thinking about bumping things up-and-down the scale: it reminds me of one of my favourite quotes by the Finnish architect Eliel Saarinen:

[image: Eliel Saarinen quote]

And one of my favourite diagrams:

[image: Stewart Brand’s pace-layering diagram]

It seems to me that a lot of the data being thrown off by personal location-based services is in the ‘fashion’ stratum of Stewart Brand’s stack. What if we combined it with information from the lower levels, and represented it back to ourselves?

Let’s try putting jumper wires across the strata – circuit-bending spacetime to create new opportunities.

Finally, I said I’d come back to the claim that you can’t automate the future – yet.

In the KiwiFoo discussion, the group referenced the burgeoning ability of LBS systems to aggregate patterns of our movements.

One thing that LBS could do is serve to create predictive models of our past daily and weekly routines – as has been investigated by Nathan Eagle et al in the MIT Reality Mining project.
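The Reality Mining intuition – that routine makes our future locations predictable – can be caricatured in a few lines. A toy first-order model (invented data, nothing to do with the actual MIT code):

```python
# Because location is largely routine, a simple first-order Markov model
# over past movements can guess where someone goes next.

from collections import Counter, defaultdict

def train(history):
    """history: an ordered list of visited places. Returns transition counts."""
    model = defaultdict(Counter)
    for here, there in zip(history, history[1:]):
        model[here][there] += 1
    return model

def predict(model, here):
    return model[here].most_common(1)[0][0]

week = ["home", "cafe", "work", "pub", "home",
        "home", "cafe", "work", "home",
        "home", "cafe", "work", "pub", "home"]

model = train(week)
print(predict(model, "work"))  # the routine, exported into the future
```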

I’ve steered clear of the privacy implications of all of this, as it’s such a third-rail issue; but, as I somewhat bluntly put it in my lightcone diagram, the aggregation of real-time location information is currently of great interest to spammers, scammers and spooks. Hopefully those developing in this space will follow the principles of privacy, agency and control of such information expounded by Coates in the development of FireEagle, and referenced in our joint talk “Polite, pertinent and pretty” last year.

The downsides are being discussed extensively, and they are certainly there: the imagined and unimagined, the intended and unintended.

But, I can’t help but wonder – what could we do if we are given the ability to export our past into our future…?


Symbolic reboots

New Building, New Bills, originally uploaded by bryanboyer.

Bryan Boyer’s master of architecture thesis project blew me away when I came across it today.
It’s called “Reclaiming Utopia” – an imagined new Capitol for the US government. It seems well-considered and suitably imposing – but the killer part is not the building, I think.

For me, it’s the almost science-fictional level of world-building touches around his project: new currency featuring the building, folk art, and even commemorative plates.

Puts you in mind of Paul Verhoeven’s ad breaks in RoboCop and Starship Troopers, in terms of really convincing peripheral visions of a world.

It puts you in a future-fictional America where something or someone has caused and completed a reboot of the Union’s sacred symbols.

Practical Mirrorworlds

Back when I was an architecture student, twelve (ahem!) or so years ago, one of the books I read with the most lasting impact was “MirrorWorlds” by David Gelernter, the computer scientist perhaps most famous for being targeted and injured by the Unabomber.

In “MirrorWorlds”, Gelernter imagines powerful software providing models and simulations of the ‘real world’, and the change in our understanding and in society that will arise from them.

Amazon.com’s page on the book says this by way of synopsis:

Imagine looking at your computer screen and seeing reality–an image of your city, for instance, complete with moving traffic patterns, or a picture that sketches the state of an entire corporation at this second. These representations are called Mirror Worlds, and according to David Gelernter they will soon be available to everyone. Mirror Worlds are high-tech voodoo dolls: by interacting with the images, you interact with reality. Indeed, Mirror Worlds will revolutionize the use of computers, transforming them from (mere) handy tools to crystal balls which will allow us to see the world more vividly and see into it more deeply.

Creating ‘mirrorworlds’ has long been a dream that we can see repeated in the history of ideas, from Buckminster Fuller’s World Game to the 1:1 scale map commissioned by Borges’ fictional emperor:

“…In that Empire, the Cartographer’s art achieved such a degree of perfection that the Map of a single Province occupied an entire City, and the Map of the Empire, an entire Province. In time, these vast Maps were no longer sufficient. The Guild of Cartographers created a Map of the Empire, which perfectly coincided with the Empire itself.”

Recently, Larry and Sergey, our current information-emperors, released Google Maps into the world.

Google Maps is an incredibly refined user experience, combining a number of valuable datasets that Google acquired, a great UI utilising cutting-edge interface-code thinking, and, as you’d expect, some very efficient back-end technology.

This being the age of “Web 2.0”, every application of merit is also invariably a platform, whether it plans to be or not, so Google Maps has spawned some amazing innovations by its users: like Jon Udell’s audio-annotated maps [see also Charlie Schick from Lifeblog’s thoughts on this type of ‘life-recording’], and the fantastic Craigslist Housing Hack, which is a powerfully useful merger of small-ads listings for accommodation with the Google Maps interface.

Google, a few weeks after launching the Maps service, integrated satellite imagery from its acquisition of Keyhole. Again, the user-base seized upon this and started making their own uses of it – and, moreover, using it to tell stories.

Take a look at Flickr’s Memory Map group [see this Wired News story for more on the meme], where people are using Google Maps to tell stories about childhood, where they grew up, or memorable events. Another trend is exemplified by MezzoBlue’s post “Google Maps and Accountability”, where the satellite imagery is used to illustrate the extent of environmental damage done by the forestry industry in British Columbia.

Salk Institute, La Jolla, California
^ Google Maps Satellite Image of La Jolla, California

And thus, a new view on the world around which you can inspire new thinking or action.

Which is precisely the promise of software simulation and modelling proposed by Gelernter a decade ago in “MirrorWorlds”.

Google’s mission statement – “to organize the world’s information and make it universally accessible and useful” – is rapidly creating practical mirrorworlds for us to explore.

Imagine a future Google mirror world, which:

  • Is real-time:
    with live satellite imagery showing weather, jet-streams, pollutant flow, traffic jams, cattle herds, refugee camps…
  • Has overlays:
    showing visualisations of abstract data – population density, energy use, wealth, “now playing”, infant mortality

And crucially…

  • Has history:
    Could show all the above, either from archive data or from simulation based on the historical record – from 1970, 1940, 1900, 1800, 1600… etc…

What realisations and reactions would we have if we could gaze into this mirrorworld knowing it was real, not a SimEarth – and, furthermore, the only one we’ve got?

It would be the software equivalent of the moment in the late sixties when the space program afforded us the first view back at the pale blue dot we’re stuck on.

We are the first ‘simulation-generation’ – we are used to constructing and manipulating ever more sophisticated models of reality or unreality on our personal computers.

Increasingly, what has started out in the mirrorworld of play that the videogame industry invents is revolutionising how we work and learn.

In their book “Got Game”, business strategists John Beck and Mitchell Wade state:

“It’s the central secret of digital gaming… Games are providing real, valuable experience… [they] offer real experience solving problems that, however fantastic their veneers, seem real to the player. When gamers head off to play, they are escaping. But… they end up in an odd-looking educational environment.”

And in education, PC pioneers like Alan Kay are pursuing ‘mirrorworld’-like learning environments, driven by the mantra that “point of view is worth 80 IQ points”.

Mobile mirrorworlds could give you that IQ boost Kay is working towards, wherever you were. Augmented-reality researchers have been donning backpacks full of computers and ridiculous-looking head-up displays for a decade or so, trying to build them.

A more practical, accessible version is being built bottom-up right now, using cameraphones, web services and primitive locative technologies. Not only Google, but Yahoo, Amazon/A9 and a host of hackers and start-ups are setting about skinning the world in data.

Once these substrates are there, you can bet that manipulable models and visualisations will be built atop them. This is the other component of the mirrorworld: the “what-if” wonderlands you can explore with a software model of reality.

Why we make models

We have always built models to understand how reality works – by taking them to breaking point, changing our approach, exploring the alternatives, we’ve made progress. We’re up against real problems caused by that progress – many parts of the pale blue dot are at breaking point – so making better decisions based on better models is crucial. Mirrorworlds are not just a playful diversion or a powerful business tool – but a survival strategy.

Before I get too misty-eyed for the mirrorworld future of sustainability, happiness and harmony, a (hyper)reality check. There are some powerful players switched-on to the power of simulation.

Bill Gates and Microsoft are actively pursuing it:

“modeling is pretty magic stuff, whether it’s management problems or business customization problems or work-flow problems, visual modeling. It’s probably the biggest thing going on”

What happens to our understanding of reality when there’s a monopoly on mirrorworlds?

The French philosopher Baudrillard, in his “Simulacra and Simulation” (you’ve read it, right? I bet BillG has… ;-), in reference to the mirrorworld mapping of Borges’ emperor, warned that:

“Abstraction today is no longer that of the map, the double, the mirror or the concept. Simulation is no longer that of a territory, a referential being or a substance. It is the generation by models of a real without origin or reality: a hyperreal. The territory no longer precedes the map, nor survives it. Henceforth, it is the map that precedes the territory – precession of simulacra – it is the map that engenders the territory and if we were to revive the fable today, it would be the territory whose shreds are slowly rotting across the map. It is the real, and not the map, whose vestiges subsist here and there, in the deserts which are no longer those of the Empire, but our own. The desert of the real itself.”

If our new digital/real mirrorworlds are to reverse the Baudrillardian desertification of the real, it is crucial that we can understand not only the territory, but also who has done the mapping and modelling, and how – that we can own it, examine it and remap/remodel it ourselves – that the maps and models are open, free and shared by all.

The peer production and scrutiny of Wikipedia (and indeed the ideological discussion around it) might give some idea of what it’s like to create a reference work of this kind.

Then we will have mirrorworlds that we can all build on.

====
A side note – as part of Amazon’s mirrorworld of the printed word, they have introduced SIPs: “statistically-improbable phrases” that are supposed to automagically sum up the essence of a book. Here are the SIPs for “MirrorWorlds”:

chronicle streams, software ensembles, computational landscape, task cloud, simulated mind, evocative possibility, ensemble programs, tuple spaces, information machinery, mass border, software revolution, memory pool, software machine, information machines
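Amazon hasn’t published how SIPs are computed, but the flavour is easy to guess at: phrases that are far more frequent in the book than in a background corpus. A toy sketch along those lines (invented scoring, invented data):

```python
# Guess at the SIPs idea: rank bigrams by how over-represented they are in
# a book relative to a background corpus.

from collections import Counter

def bigrams(words):
    return Counter(zip(words, words[1:]))

def sips(book_words, background_words, top=3):
    book = bigrams(book_words)
    background = bigrams(background_words)

    def score(phrase):
        # relative frequency in the book vs the background,
        # with add-one smoothing for phrases the background lacks
        return (book[phrase] / len(book_words)) / \
               ((background[phrase] + 1) / len(background_words))

    ranked = sorted(book, key=score, reverse=True)
    return [" ".join(p) for p in ranked[:top]]

book = "the chronicle streams flow and chronicle streams merge into tuple spaces".split()
background = "the rivers flow and the rivers merge into the sea".split()
print(sips(book, background))  # 'chronicle streams' comes out on top
```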