glitch

I was so taken by Nick Bostrom’s article in New Scientist on The Simulation Argument (in which he argues that our universe may well be a computer simulation) that I wrote a sequence of poems about it, in which, amongst other things, I note that the great majority of computer simulations run by our civilisation are computer games. Of course, my sequence is not intended to be intellectually rigorous, rather an exploration of its title (“an engineering rush”). In it, incidentally, I prove categorically that the universe exists purely for the creation of poetry. Thus the only way to keep the universe going and keep everyone alive and happy is for poets to write oodles of verse, and since the best inspiration for good poetry is lust and love, pretty young women should, for the sake of everyone including themselves, throw themselves at the nearest poet (‘toyboys to the girlie poets’).


I’m also a professional nerd. A short article by Professor Barrow of Cambridge University in the current issue of New Scientist suggests a lack of awareness of some software engineering techniques. I accept his assumption that whoever is running our simulation has finite resources, so that crude approximations to reality are made, but I would point out that such a powerful simulating computer would surely have rollback facilities; even computer games have crude rollback in the form of saved games. If a glitch occurred, the universe could be rolled back and rerun with the problem corrected.
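As a minimal sketch of the idea (my own illustration, nothing from Barrow’s article, with invented names like checkpoint and rollback_and_rerun), a simulator could snapshot its state every tick and, whenever a glitch is spotted, restore the last good snapshot and rerun from there, exactly as a player reloads a saved game:

```python
import copy
import random

class Simulator:
    """Toy world simulation with save-game style checkpoints and rollback."""

    def __init__(self):
        self.state = {"tick": 0, "entities": {}}
        self.checkpoints = []  # saved snapshots of earlier states

    def checkpoint(self):
        # Save a deep copy of the current state, like writing a save file.
        self.checkpoints.append(copy.deepcopy(self.state))

    def step(self):
        # Advance the simulated world by one tick (the details don't matter here).
        self.state["tick"] += 1
        self.state["entities"][self.state["tick"]] = random.random()

    def glitched(self):
        # Stand-in for whatever consistency check spots a glitch.
        return random.random() < 0.01

    def rollback_and_rerun(self):
        # Restore the last good checkpoint and replay the lost ticks.
        target = self.state["tick"]
        self.state = copy.deepcopy(self.checkpoints[-1])
        while self.state["tick"] < target:
            self.step()

sim = Simulator()
for _ in range(1000):
    sim.checkpoint()
    sim.step()
    if sim.glitched():
        sim.rollback_and_rerun()  # the inhabitants never see the glitch
```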

A sophisticated system would also avoid the problem of glitches by making them non-apparent, by making the simulation software self-correcting. New Scientist gave the example of Disney’s animations, which, when showing light reflecting off water, draw a simple patch of glare. As long as nothing more is needed from the light in that glare, that does fine. But in a sophisticated software simulation, if the difference between a blast of white light standing in for glare and a blast of white light subjected to proper analysis ever became significant, then, and only then, the extra detail could be computed to keep the simulation in good order. This is called lazy programming, and it is a well-established systems technique: don’t do the work until and unless it’s necessary.
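In conventional software this is just lazy evaluation. A minimal sketch of how it might look (my own, with invented names such as cheap_glare and full_reflection) caches a cheap placeholder and only computes the expensive version the first time someone actually inspects it:

```python
from functools import cached_property

class WaterSurface:
    """Render cheaply until someone looks closely enough to need the detail."""

    def cheap_glare(self):
        # The Disney trick: a simple patch of white where the reflection goes.
        return "plain white glare"

    @cached_property
    def full_reflection(self):
        # Expensive optics, computed lazily: this only runs if the attribute
        # is ever read, and then only once (the result is cached).
        print("computing full optics...")
        return "physically accurate reflection"

water = WaterSurface()
print(water.cheap_glare())     # nobody has looked closely: cheap path only
print(water.full_reflection)   # an observer analyses the light: pay the cost now
print(water.full_reflection)   # already computed, served from the cache
```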

To put it another way, time in a simulated universe is not the same as time for the simulating computer. A computer, going forward in its own time, can move the simulated environment backward in simulation time, fix the glitch, and move it forward again, or move across simulation time, jump a few centuries, or whatever. Just because the simulation ought to be reasonably consistent and in a particular order (our time) doesn’t mean it has to be produced in that order. A glitch that first shows up in our present can be undone in our past.
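To make the distinction concrete (again my own sketch, not anything from the article): simulated time is just an index into data the simulator owns, so the simulator, always moving forward in its own time, is free to revisit and rewrite any simulated moment before carrying the history onward:

```python
# Simulated time is just an index into the simulator's data; the computer's
# own clock keeps running forward while it edits whichever simulated moment it likes.
history = []   # history[t] is the state of the world at simulated tick t

def advance(state):
    # Whatever the physics is; here we just count events.
    return {"tick": state["tick"] + 1, "events": state["events"] + 1}

state = {"tick": 0, "events": 0}
for _ in range(100):
    history.append(state)
    state = advance(state)

# A glitch is noticed "now" (tick 99) but originated back at tick 42.
# Going back in simulated time is just ordinary forward work for the computer:
state = dict(history[42], events=0)   # repair the bad value in the simulated past
del history[42:]
while state["tick"] < 100:            # replay up to the simulated present
    history.append(state)
    state = advance(state)
```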

If we assume that we, as the human race, are significant in the simulation and not just an irrelevant consequence, then things need not be simulated until someone needs to encounter them. Perhaps the first astronomers created the stars rather than discovered them. Whoever first looked up through a telescope thereby created galaxies so they could be looked at. In other words, the universe would be created because we looked at it. Doesn’t this perhaps tie in somewhere with the Copenhagen interpretation of quantum mechanics, at least in the popular reading that quantum events aren’t resolved until a conscious being observes them?
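In software terms this is just lazy instantiation: the object doesn’t exist until someone first asks for it, after which every later observer gets the same object. A minimal sketch (invented names like Sky and look_at, purely illustrative):

```python
class Sky:
    """Galaxies are created the first time anyone points a telescope at them."""

    def __init__(self):
        self._galaxies = {}   # nothing exists until it is observed

    def look_at(self, coordinates):
        if coordinates not in self._galaxies:
            # First observation: create the galaxy on demand.
            self._galaxies[coordinates] = f"galaxy at {coordinates}"
        # Every later observer sees exactly what the first observer saw,
        # so the laziness is undetectable from inside the simulation.
        return self._galaxies[coordinates]

sky = Sky()
print(sky.look_at("Andromeda"))   # created here
print(sky.look_at("Andromeda"))   # merely retrieved here
```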


Incidentally, for God’s sake don’t tell the US patent office about this; otherwise they’ll accept that explorers are creators and we’ll have someone patenting all new discoveries. Imagine: a new elementary particle is created because someone decides to look where it should exist; an adept patent lawyer patents the new particle, just as discovered genes are already patented; the particle turns out to be in every atom; so everyone has to pay licence fees to the lawyer to be legally entitled to have a body.

Another possible consequence of a finite-resource simulation comes from noting that in computer games, for example, the resources used by looking at something are far less than the resources used by visiting it. It’s much easier to draw a picture of a town than to simulate it for exploration. A low-resource simulation might allow us to look at much of the universe, but prevent us from visiting it. Does this suggest the speed of light is there to limit exploration, assuming the limit really is inescapable? That doesn’t work for me; it doesn’t actually stop exploration, it just makes it a serious engineering challenge. Perhaps the border we cannot break is the edge of the visible universe. Now that suggests a long and interesting future, a nice big play pen.
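Games make that trade-off explicit with level-of-detail schemes: distant things get a cheap representation, and the expensive simulation is only spent on what the player can actually reach. A rough sketch under that assumption (invented names and thresholds, not from any particular engine):

```python
VISIBLE_RADIUS = 1000      # we can look this far...
REACHABLE_RADIUS = 10      # ...but only ever travel this far

def level_of_detail(distance):
    """Pick how much simulation a region deserves, by distance from the player."""
    if distance <= REACHABLE_RADIUS:
        return "full simulation"     # expensive: physics, inhabitants, fine detail
    elif distance <= VISIBLE_RADIUS:
        return "painted backdrop"    # cheap: just enough to look right in a telescope
    else:
        return "not computed"        # beyond the visible edge: nothing at all

for d in (3, 250, 5000):
    print(d, "->", level_of_detail(d))
```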

The article implies, although I suspect this is playful, that apparent inconsistencies in the universe reveal a flaw in the universe itself, rather than in our understanding of it. I must admit a slight lack of faith in the perfection of our current understanding of the universe; I foolishly fear that any inconsistencies between nature and theory might just suggest that the theory isn’t quite absolutely perfect just yet.

Rather than looking at the underlying physical structure of the universe, perhaps a better place to look for glitches might be the underlying conceptual structure: mathematics. Maths applies everywhere, to everything, forever (I assume). If that is the case, then a glitch in maths would be much more difficult to roll back; effectively the universe would have to be restarted to be fixed. Of course, this assumes that the language of mathematics is more than just another human language, with human linguistic weaknesses. Does Gödel’s theorem suggest a glitch? Could his theorem be reinterpreted, so that rather than showing that some things are unprovable, it shows that, yes, sometimes 1 really does equal 2, and there is a maths glitch?