Imagine if you could do a 3D scan of various objects at a certain resolution, say 1 megapixel (one million pixels) per square centimeter. Then you took those various objects and placed them all in a 3D environment. You would end up with an environment that was several billion pixels in size. To be able to see and interact with that environment in a realistic way, how well would it need to be displayed to you?
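To put rough numbers on that, here's a back-of-the-envelope sketch. The object surface areas are hypothetical examples I made up purely to illustrate the scale, not measurements from the text:

```python
# Back-of-the-envelope: how many pixels a room of scanned objects adds up to.
# Surface areas below are hypothetical illustrations, not real measurements.
PIXELS_PER_CM2 = 1_000_000  # 1 megapixel per square centimeter

objects_cm2 = {
    "table": 15_000,             # roughly 1.5 m^2 of surface
    "chair": 8_000,
    "bookshelf": 20_000,
    "walls_and_floor": 500_000,  # roughly 50 m^2
}

total_pixels = sum(area * PIXELS_PER_CM2 for area in objects_cm2.values())
print(f"{total_pixels:,} pixels")  # 543,000,000,000 pixels
```

Even a sparsely furnished room lands in the hundreds of billions of pixels, which is the scale the rest of this piece is wrestling with.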
Recently I started thinking about this. The real world is very detailed, right down to the atomic and subatomic levels, string theory, etc. But the human eye isn't capable of resolving objects that small all by itself. So in a virtual environment, do you need that level of detail?
Short answer: no. Truth be told, those are the elements that make up the fabric of our universe. But you go into a digital world knowing full well it's made up of pixels. You're more interested in the broader picture than in the small detail.
So say you step into a virtual reality and you're standing in a room, a room comprised of billions of pixels. From a distance most of the objects contain detail, but you're hard pressed to see it all. You move closer to an object and its detail starts to jump out at you, while everything else slowly moves out of the picture or becomes lost as a blob of blurry background.
I have 20/20 vision myself, and I can only get as close as about 4 or 5 inches to an object and still be able to focus on it. So an object that gets scanned only needs enough resolution to hold up at that distance.
The human eye is really an amazing thing. It is a continuously scanning mechanism, and different parts of the visual field are updated at different times. Unlike motion pictures, which shoot 24 complete images per second, the human eye is fluid in its updating.
The eye sees only a small portion of the world in focus at one time. If you focus on an object, you'll notice that the objects immediately around it are somewhat in focus, those around them even less so, and the further away from the object you're focusing on, the softer your vision gets. Likewise, the closer you move to an object, the further everything around it falls out of focus.
The brain plays a big role in stitching the world together around you. As your eye constantly scans, your subconscious tells you that you can see more than you really can (by "see" I mean focus on), because you already have a memory of your surroundings.
This train of thought has led me to believe that the human eye may be less powerful than previously believed. In terms of computer and digital photography technology, it may have fewer pixels of resolving power than some people think. Not that this means a whole lot; the eye is still far better with color, light sensitivity, and motion than anything we have today. But when it comes to creating a digital landscape, we may need less display resolution than we think.
The simple logic is this:
The 3D world is full of detail, but the eye sees only a portion of that detail at a time. The brain fills in the blanks. So although only that glass of water is in perfect focus, you know the yellow blob to the left and behind it is a book, because you were just looking at it.
If the digital world is built from these elements, a form of compression may be devised where the object in focus retains full detail and the objects around it carry less. Less detail means less information, and hence less information needs to be sent through the available bandwidth. The information would have to change quickly, because of how fast the eye can refocus on different objects, but the eye itself wouldn't need all the information at once. The brain would do the work of constructing an image of the virtual world from memory. The virtual world itself would just need a way to track the human eye and shift detail to whatever is being looked at in focus.
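The core of that scheme can be sketched in a few lines. Everything here is my own illustration, assuming the display can report a normalized gaze point and each object's screen position; the two radii and the falloff curve are arbitrary choices, not anything from the text:

```python
import math

def detail_level(gaze, obj_pos, full_detail_radius=0.1, blur_radius=0.5):
    """Return a detail fraction (0..1) for an object, based on its
    distance from the gaze point (screen coordinates normalized to 0..1)."""
    dist = math.dist(gaze, obj_pos)
    if dist <= full_detail_radius:
        return 1.0   # object under the gaze: send full resolution
    if dist >= blur_radius:
        return 0.05  # far periphery: a blurry blob is enough
    # Linear falloff between the two radii.
    t = (dist - full_detail_radius) / (blur_radius - full_detail_radius)
    return 1.0 - t * 0.95

# The glass you're focusing on gets full detail;
# the book off to the left gets a fraction of it.
gaze = (0.5, 0.5)
print(detail_level(gaze, (0.52, 0.5)))  # 1.0
print(detail_level(gaze, (0.2, 0.6)))
```

The bandwidth saving falls out directly: objects far from the gaze point ship at a small fraction of their scanned resolution, and the detail budget follows the eye as it moves.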
It seems a perfectly valid concept. Whether it would prove a worthwhile addition to virtual-reality technology is yet to be discovered.