Ubicomp and science fiction writers

http://www.3pointd.com/20060908/vernor-vinge-paints-the-future-at-agc/#more-700

Vernor Vinge Paints the Future at AGC

Posted Friday, September 8th, 2006, at 4:54 pm Eastern by Mark Wallace
Tags: 3D Web, 3pointD, events, games, metaverse, mobile computing, RFID, Social software, Technology, Virtual Reality

Award-winning science fiction writer Vernor Vinge, speaking at the Austin Game Conference, gave his vision of a future in which connectivity is literally in the air around us. Author most recently of Rainbows End, Vinge painted a picture of ubiquitous connectivity similar to the one narrated in that novel. So well connected will we be, according to Vinge, that “post-human” capabilities will arise from groups of people networked together. “It will be a very glorious thing to be an early post-human artist,” Vinge said.

“Virtually every aspect of purpose, faith and fantasy could have a constituency in such a world.” His vision was compelling, though it remains to be seen how quickly it will be realized, or whether the discrete functions Vinge described won’t more likely take fuzzier forms as they come about. Below is a transcript of his remarks:

It’s great to be able to talk to people who are actually doing things and making all of this stuff happen. What I want to talk about today is a scenario that largely seems very likely and has a certain planning utility in addition to making good stories. I want to talk about the hardware that [may bring about] ubiquitous computing. Now, ubiquitous computing is sort of a slippery term. As with treason, it’s mainly a matter of dates.

In thinking about this, I have several steps or types of technology that lead into it. First of all, starting with the 1980s, we have embedded systems, things like microcontrollers in our typewriters. It’s a great economic win, because it allows us to substitute software for moving parts engineering, and so embedded microprocessors at this point are pretty ubiquitous, to the point that it can be kind of scary. Now, we’re entering an era of networked embedded systems, of devices able to talk to each other and to us.

This is a path that we are accelerating down because, again, it has a great economic win. The stuff that’s coming up on the near horizon with this is RFIDs, but not just RFIDs: smart RFIDs, which will put embedded microprocessors not only in large discrete devices but in more or less throwaway devices, and also in standalone situations. That gets us into what I think most of us have heard about, like smart dust and MEMS [Micro-Electro-Mechanical Systems].

You can imagine such ubiquity being then hooked up with sensors and effectors. An added feature to make this really turn into the sort of effectiveness it could be is the notion of localizers. In its simplest conceptual form, a localizer is simply a feature on a networked embedded processor whereby the processor knows where it is in 3D space.

In principle, that actually is very easy. You don’t even need GPS: if you simply have lots of them, thousands in this room scattered around as an ad hoc network, they can figure out their relative position to the other nodes. And in fact they can know where things are outside of this room if the world as a whole is hooked up this way.
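The relative-positioning idea Vinge sketches here has a well-known mathematical core: if nodes can measure their pairwise distances (say, from radio time-of-flight), their coordinates can be recovered up to rotation and translation via classical multidimensional scaling. The following is a minimal sketch under that assumption; the four-node "room" is hypothetical.

```python
import numpy as np

def relative_positions(dist, dim=3):
    """Recover node coordinates (up to a rigid transform) from a matrix
    of pairwise distances, via classical multidimensional scaling."""
    n = dist.shape[0]
    d2 = dist ** 2
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ d2 @ j                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dim]     # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Hypothetical example: four nodes scattered around a room.
truth = np.array([[0.0, 0, 0], [3, 0, 0], [0, 4, 0], [1, 1, 2]])
dist = np.linalg.norm(truth[:, None] - truth[None, :], axis=-1)
est = relative_positions(dist)
# The recovered layout reproduces every pairwise distance exactly:
est_dist = np.linalg.norm(est[:, None] - est[None, :], axis=-1)
print(np.allclose(est_dist, dist))  # True
```

Real smart-dust localization would use noisy, incomplete distance measurements and iterative refinement, but this is the geometric heart of it: relative position falls out of the network itself, no GPS required.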

Think about what that would mean. It actually eliminates whole industries. It eliminates hundreds of different locational technologies. Almost all the moving-parts machinery we have, and coordination of moving-parts machinery, involves either having humans know how to position the parts or a wide variety of technologies working together.

Hook that up with the issue of communications, and we actually have very interesting solutions for getting results out to end points. If you know exactly where things are, not only can you make use of the ultra-wideband that we already are moving into, but you could even imagine using very good localizer technology to set up extremely high bit-rate links that were highly directional.

What this means is that the overall backbone of communication is still there; we’re still making use of very, very large pipes to send information long distances. Localizers allow you, on a case-by-case basis, to extend the capacity of those pipes down to the finest end point that you could want, and to do it in an ad hoc and real-time way.
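The prerequisite for those directional links is simple once positions are known: each node just computes the bearing to its peer and points its antenna. A minimal sketch, with entirely hypothetical node positions:

```python
import math

def aim(src, dst):
    """Azimuth/elevation (degrees) for pointing a directional link from
    node `src` toward node `dst`, given their 3D localizer positions."""
    dx, dy, dz = (d - s for s, d in zip(src, dst))
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return azimuth, elevation

# A node at (0, 0, 1) aiming at a peer 5 m away at the same height:
az, el = aim((0.0, 0.0, 1.0), (4.0, 3.0, 1.0))
print(round(az, 2), round(el, 2))  # 36.87 0.0
```

The real engineering problem is in the steerable radio hardware, but the localizer network makes the geometry side of it trivial and continuously up to date.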

A problem that really hasn’t shown up very much in our era, but ultimately could be really interesting, is the problem of node guano. If you have a lot of ad hoc nodes, in a situation where nodes don’t last forever, ultimately we could be hip deep in dead nodes.

In this environment, wearable computers are to the embedded networks what PCs have been to our Internet, and this is something that you could imagine working out very smoothly, almost seamlessly, in a way that’s already accepted by consumers.

I am convinced that the day we really get high-resolution heads-up displays, most people who nowadays carry a Bluetooth earphone and microphone would have no problem wearing eyeglasses that gave them a heads-up display of something like 4,000 by 4,000, if the infrastructure had moved along in concert. Then high-resolution HUDs could be exploited.

That’s an example of a highly disruptive technology. It essentially destroys all other display technology except as emergency backups.

If you were able to get localization that was really good, you could imagine setting this up so that if your wearable knew where you were looking, what the orientation of your head was and where your eyeballs were tracking, then in addition to being able to produce the world’s best display, as good as the world’s best desktop display, you could actually overlay things in the environment.

The term for that in academic circles is augmented reality. In that situation, having the processing power that’s involved with the network infrastructure I just described becomes very very useful, because you could in an ad hoc way overlay those portions of reality that you wanted to.

In an auditorium like this you could make the walls look like whatever you wanted; you could make the speaker look like a clown; and since everything was networked, you and your friends could get together and agree on what things looked like. The notion of consensual imaging becomes very, very important, and again this is actually a very disruptive technology, if it were finally to happen. It blows away all discussion of large three-dimensional display technologies.
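The overlay step Vinge describes (head pose plus eye tracking driving what the glasses draw) reduces, at its simplest, to projecting a world-space point into display pixels. A minimal pinhole-camera sketch, assuming a hypothetical 4,000-by-4,000 HUD and a head pose supplied as a position and a 3x3 rotation matrix:

```python
import numpy as np

def world_to_hud(point, head_pos, head_rot, f=2000.0, res=(4000, 4000)):
    """Project a world-space point into heads-up-display pixels, given the
    wearer's head position and orientation (3x3 rotation matrix).
    Simple pinhole model; returns None if the point is behind the wearer."""
    # Transform into head-centered coordinates (x right, y up, z forward).
    p = head_rot.T @ (np.asarray(point, dtype=float) - np.asarray(head_pos, dtype=float))
    if p[2] <= 0:
        return None                     # behind the head: nothing to draw
    u = res[0] / 2 + f * p[0] / p[2]    # perspective divide
    v = res[1] / 2 - f * p[1] / p[2]
    return u, v

# Hypothetical: a virtual label 5 m straight ahead and 1 m up.
print(world_to_hud([0, 1, 5], head_pos=[0, 0, 0], head_rot=np.eye(3)))
# (2000.0, 1600.0)
```

Consensual imaging then amounts to the networked wearables agreeing on the same set of world-space points and each rendering them from its own pose.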

I want to talk about something that ultimately, for the first time, really does go a long way toward killing off theaters as separate architectural structures.

If you take together all of the things I have been pushing here, there really is a situation where cyberspace has leaked into the real world. In fact, the title of the talk was Inside Out, which was intended to convey the notion that what was inside the box in all eras up to ours is, in this sort of era, outside. In other words, reality can be whatever the software people choose to make it, and whatever the people operating in the outside real world choose it to be.

So both cinema and games become something that is totally immersive at all times and in all places where the user wishes them to be. It might be so even if the user does not wish it: if his wearable gets hijacked and he persists in using it, he may be subjected to things that aren’t there.

Wearables are the interface to it, but the situation with the network as a whole is very interesting. It hasn’t gotten rid of big pipes or server farms; however, we would be in a situation where reality has become its own database, in the sense that objects in the outside world, millions of them, would know what they are, know where they are, know where their nearest neighbors are, and can talk to their nearest neighbors and by extension to anything in the world.
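The "reality as its own database" idea can be sketched as a data structure: each object carries its identity, its localizer position, and links to its nearest neighbors, and reaching "anything in the world" is just a hop-by-hop search over those links. A toy sketch with hypothetical objects (breadth-first search standing in for real ad hoc routing):

```python
from collections import deque

class Node:
    """An object in Vinge's world: it knows what it is, where it is,
    and who its nearest neighbors are."""
    def __init__(self, name, position):
        self.name = name
        self.position = position   # from the localizer network
        self.neighbors = []        # direct radio links to nearby objects

    def route_to(self, target_name):
        """Reach any named node by hopping neighbor to neighbor
        (breadth-first search over the ad hoc network)."""
        queue = deque([(self, [self.name])])
        seen = {self.name}
        while queue:
            node, path = queue.popleft()
            if node.name == target_name:
                return path
            for nb in node.neighbors:
                if nb.name not in seen:
                    seen.add(nb.name)
                    queue.append((nb, path + [nb.name]))
        return None   # unreachable: no chain of neighbors connects them

# Hypothetical chain: a chair can talk to a lamp only via the wall node.
chair = Node("chair", (0, 0, 0))
wall = Node("wall", (2, 0, 1))
lamp = Node("lamp", (4, 0, 2))
chair.neighbors, wall.neighbors = [wall], [chair, lamp]
print(chair.route_to("lamp"))   # ['chair', 'wall', 'lamp']
```

Scaled to millions of objects this is exactly the sense in which the world becomes queryable: any object can be found and addressed through the mesh of its neighbors.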

So in a sense the real world has awakened at that point, not in the sense of being humanly intelligent, but in the sense in which we talk about smart phones and devices: sort of ubiquitous and ubiquitously networked.

(((etc etc etc)))