2006: Mobile Earth GIS Technology

June 18th, 2006 23:41 by Stéphane de Luca — updated on Nov. 20th, 2018 09:22.

You certainly already know Google Earth, a system that lets you view satellite photographs of any location on Earth; it offers both 2D and 3D representations.
This service is available on the desktop but, unfortunately, nothing is available for your mobile phone.

Mobile Earth is a technology dedicated to the mobile wireless industry: its purpose is to provide the industry with an innovative technology that enables users to play in a real-time 3D environment, with graphics that are true-to-life representations of cities and landscapes, showing off a seemingly "infinite", unmatched level of detail.

Stéphane de Luca at In-Fusio, June 23rd, 2006

Mobile Earth 3D: First part

I started the project in late April this year, with the objective of providing a streaming technology capable of displaying interactive, real-time 3D of unmatched quality over high-performance 3.5G wireless networks: HSDPA.

A handset user will play a game — and later use Geographic Information System (GIS)-based services — that displays very high-quality backgrounds based on real-world satellite photographs of the Earth.

The idea is to have a server render the world in real-time 3D at the location of the player's avatar. The world is rendered with a very large z-far value — typically more than one hundred kilometers — hence the sensation of an extremely large, persistent world.

Every single frame generated by the engine is compressed on the fly to feed a video stream.
On the other end, a handset is connected to the server via a TCP connection over an HSDPA radio network, itself connected to the Internet. HSDPA is the 3.5G network capable of 1 Mbps of downstream bandwidth; HSDPA deployments are currently starting up in France.

The video is then streamed down to the handset, where the client decodes the stream and displays the background at the phone's resolution — typically QVGA, 320×240 pixels.

As a result, the player sees incredibly high-quality graphics.

The player can move his avatar up, down, left and right while moving. This information is sent up to the server. Upon receipt, the game engine interprets the player's move and places the camera accordingly, so that the video stream is altered in response.

As a result, the player feels that he can move freely within the background and interact with it.
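For the curious, a minimal sketch of what the upstream control channel can look like is given below; the packet layout and the applyInput helper are illustrative assumptions of mine, not the engine's actual protocol.

```cpp
#include <cstdint>

// Hypothetical layout of one player input as it travels up the TCP connection.
struct InputPacket {
    uint32_t sequence; // incremented with every input; useful later to match inputs to frames
    int8_t   moveX;    // -1 = left, +1 = right, 0 = none
    int8_t   moveY;    // -1 = down, +1 = up,    0 = none
};

// Server-side camera attached to the player's avatar.
struct Camera {
    float x = 0.0f, y = 0.0f, z = 0.0f; // world position in metres
};

// Called by the game engine when an InputPacket arrives: the camera is nudged,
// so the next rendered frame, and therefore the video stream, reflects the move.
void applyInput(Camera& cam, const InputPacket& in, float speed, float dt) {
    cam.x += static_cast<float>(in.moveX) * speed * dt;
    cam.y += static_cast<float>(in.moveY) * speed * dt;
}
```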

Welcome to GIS and its gigantic memory space

Playing such a game requires very high-resolution imagery in order to see buildings, houses and even cars on the highway, for example.
Partnering with IGN, I was provided with a set of very high-resolution satellite photographs: four pixels per square meter. At this resolution, we can clearly see cars, and even parasols on sunny beaches!
Unfortunately, the other side of the coin is the very large size of the related files. For example, storing all the photographs for one department takes 60 gigabytes.
I will therefore need some 6 terabytes to store satellite photographs for the whole of France — the equivalent of 1,257 DVDs!
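As a rough sanity check on these figures, here is a quick back-of-the-envelope computation; the surface areas are approximate values of my own and the pixels are counted uncompressed, so it only confirms the order of magnitude.

```cpp
#include <cstdio>

int main() {
    const double pixelsPerSquareMetre = 4.0;      // i.e. 0.5 m ground resolution
    const double bytesPerPixel        = 3.0;      // uncompressed RGB
    const double departmentKm2        = 4300.0;   // roughly the Alpes-Maritimes
    const double franceKm2            = 550000.0; // metropolitan France, roughly

    const double bytesPerKm2 = 1e6 * pixelsPerSquareMetre * bytesPerPixel;
    std::printf("one department: ~%.0f GB\n", departmentKm2 * bytesPerKm2 / 1e9); // ~52 GB
    std::printf("whole country : ~%.1f TB\n", franceKm2 * bytesPerKm2 / 1e12);    // ~6.6 TB
    return 0;
}
```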

Stage I : High-performance real-time rendering

I started off with a LOD-based system for the management of the world.

I set up a server that processes the data, implementing a data flow from the IGN photographs and altitude images to the engine database. 60 GB is needed to hold the original "Alpes-Maritimes" data.
— Altitude: altitude information is contained in a 38 MB 8-bit indexed TIFF image, where the index gives the altitude level with a resolution of 50 meters; I slice the image into many 2048×2048-pixel squares with a down-sampling algorithm.
— Photographs: the satellite photographs are converted from 2000×2000-pixel RGB TIFF images into 2048×2048-pixel RGB JPEGs, from which I generate a set of 10 mipmap images (see the sketch just below). At the moment this takes 20+ minutes on a P4 2GHz, even for the small portion I use for the game.
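Here is a minimal sketch of how such a mipmap chain can be produced; the Image structure and the 2×2 box filter are mine, and the TIFF decoding and JPEG encoding steps are assumed to be handled by an image library and left out.

```cpp
#include <cstdint>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<uint8_t> rgb; // width * height * 3 bytes
};

// Box-filter the image down to half its size in each dimension.
Image downsampleHalf(const Image& src) {
    Image dst;
    dst.width  = src.width / 2;
    dst.height = src.height / 2;
    dst.rgb.resize(static_cast<size_t>(dst.width) * dst.height * 3);
    for (int y = 0; y < dst.height; ++y)
        for (int x = 0; x < dst.width; ++x)
            for (int c = 0; c < 3; ++c) {
                int sum = 0;
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx)
                        sum += src.rgb[((2 * y + dy) * src.width + (2 * x + dx)) * 3 + c];
                dst.rgb[(y * dst.width + x) * 3 + c] = static_cast<uint8_t>(sum / 4);
            }
    return dst;
}

// Level 0 is the full 2048x2048 tile; levels 1..9 are successively halved (down to 4x4).
std::vector<Image> buildMipmaps(const Image& level0, int levels = 10) {
    std::vector<Image> chain;
    chain.push_back(level0);
    for (int i = 1; i < levels; ++i)
        chain.push_back(downsampleHalf(chain.back()));
    return chain;
}
```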

For the moment, the rendering engine uses only a small subset of the department's data. The LOD system commands the tessellation on demand in order to minimize the required bandwidth.
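The exact heuristic isn't detailed here, but the idea can be sketched as a simple distance-based level pick per terrain tile; the tile size, thresholds and vertex counts below are assumptions of mine.

```cpp
#include <algorithm>
#include <cmath>

constexpr int   kMipLevels = 10;      // matches the 10 mipmaps generated per tile
constexpr float kTileSize  = 1024.0f; // metres covered by a 2048x2048 tile at 0.5 m/pixel

// Returns 0 (full detail) for tiles under the camera, up to kMipLevels - 1 on the horizon.
int lodForTile(float tileCenterX, float tileCenterZ, float camX, float camZ) {
    float dx = tileCenterX - camX;
    float dz = tileCenterZ - camZ;
    float distance = std::sqrt(dx * dx + dz * dz);
    // One coarser LOD level every time the distance doubles beyond one tile size.
    int level = static_cast<int>(std::floor(std::log2(std::max(distance / kTileSize, 1.0f))));
    return std::clamp(level, 0, kMipLevels - 1);
}

// Grid resolution used to tessellate a tile at a given LOD:
// 513x513 vertices under the camera, down to 2x2 in the far distance.
int verticesPerSide(int lod) { return (1 << (kMipLevels - 1 - lod)) + 1; }
```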

Each frame is captured from the frame buffer and compressed as a JPEG file on disk, for a second process in charge of the streaming.
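Below is a minimal sketch of that capture step, assuming an OpenGL renderer and the libjpeg library (the article names neither), with error handling kept to a bare minimum.

```cpp
#include <cstdio>
#include <vector>
#include <GL/gl.h>
#include <jpeglib.h>

// Read back the current frame buffer and write it to disk as a JPEG file.
bool captureFrameToJpeg(const char* path, int width, int height, int quality) {
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    FILE* out = std::fopen(path, "wb");
    if (!out) return false;

    jpeg_compress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, out);

    cinfo.image_width      = width;
    cinfo.image_height     = height;
    cinfo.input_components = 3;
    cinfo.in_color_space   = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, quality, TRUE);
    jpeg_start_compress(&cinfo, TRUE);

    // OpenGL rows start at the bottom of the frame, JPEG rows at the top, so write them flipped.
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = &pixels[(height - 1 - static_cast<int>(cinfo.next_scanline)) * width * 3];
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    std::fclose(out);
    return true;
}
```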

Stage II : Video Streaming

The generated images are compressed on the fly and sent down to the handset using a PacketVideo server.
This project has to prove the real-world performance of HSDPA technology, which offers a theoretical bandwidth of 1 Mbps! Meanwhile, the application is scalable and will also support UMTS networks.

Stage III : J2ME client

The client needs to play back video in a background layer while displaying interactive assets on top of it. It therefore requires correct handling of alpha blending, or at least of punch-through.
Oddly enough, there is no solution for doing this at the time of writing; this is why we turned to a different approach and chose a Flash player that supports video playback: the chosen solution is the Bluestreak client.
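To make the constraint concrete, the sketch below shows what punch-through compositing amounts to, independently of any particular player; the buffer format and key colour are arbitrary choices of mine, and this is in no way the Bluestreak API.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// 16-bit RGB565 buffers, a common frame buffer format on handsets of that era.
// Every overlay pixel that is not the key colour replaces the video pixel below it;
// key-coloured pixels are "punched through" and let the streamed background show.
void punchThrough(std::vector<uint16_t>& videoFrame,     // decoded video frame, width*height
                  const std::vector<uint16_t>& overlay,  // interactive assets, same size
                  uint16_t keyColour)                     // e.g. pure magenta, 0xF81F
{
    const size_t n = std::min(videoFrame.size(), overlay.size());
    for (size_t i = 0; i < n; ++i)
        if (overlay[i] != keyColour)
            videoFrame[i] = overlay[i];
}
```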

This solution raises two important issues that impact us: firstly, rendering vector-based graphics is less efficient than rendering simple bitmaps; secondly, the result is less realistic, which works against the project's goal of rendering real-life images.
Nevertheless, we go for it.

The game displays a spacecraft that must fly through big 3D "donut" bonuses laid out along a chained cubic spline path.
The player can move "freely" within this virtual tunnel, but in this first version he cannot go just anywhere in the world.

Stage IV : Liberté, liberté chérie

In this stage, the player will be totally free to go anywhere he wants; he will be able to explore the whole background.
The player's inputs will be uploaded in real time to the server through a TCP connection. Upon receipt, the engine will take the necessary actions to perform what the player requested — including moving the camera — hence generating new video frames.
The big issue here is that we have 6 seconds of buffered video on the client, which imposes a 6-second latency. For this reason, the client anticipates the background's move by applying the 3D transformation to the spacecraft itself, while the background is still in its previous position.
When the corresponding frames begin to stream down, the client compensates for the background's movement by cancelling the anticipated transformations already applied to the spacecraft. For this purpose, I have embedded the 3D transformation information as metadata within the graphics frames themselves.
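A minimal sketch of that compensation loop is given below; the class, field names and sequence-number scheme are inventions of mine for illustration, the principle being that each input is applied to the spacecraft immediately and only cancelled once a frame's embedded metadata shows the server camera has caught up with it.

```cpp
#include <cstdint>
#include <deque>

struct Vec3 { float x = 0, y = 0, z = 0; };

struct PendingInput {
    uint32_t sequence; // sequence number sent along with the input to the server
    Vec3     offset;   // displacement already applied locally to the spacecraft
};

class LatencyCompensator {
public:
    // Called when the player moves: apply the offset to the craft right away
    // and remember it until the server acknowledges it in a frame's metadata.
    void onLocalInput(uint32_t sequence, const Vec3& offset) {
        anticipated_.x += offset.x; anticipated_.y += offset.y; anticipated_.z += offset.z;
        pending_.push_back({sequence, offset});
    }

    // Called for each decoded frame; ackedSequence comes from the 3D-transform
    // metadata the server embedded in that frame.
    void onFrameMetadata(uint32_t ackedSequence) {
        while (!pending_.empty() && pending_.front().sequence <= ackedSequence) {
            const Vec3& o = pending_.front().offset;
            // The background now reflects this move, so stop anticipating it.
            anticipated_.x -= o.x; anticipated_.y -= o.y; anticipated_.z -= o.z;
            pending_.pop_front();
        }
    }

    // Residual offset to apply to the spacecraft drawn on top of the streamed background.
    const Vec3& craftOffset() const { return anticipated_; }

private:
    Vec3 anticipated_;
    std::deque<PendingInput> pending_;
};
```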

But this is for the next stages… be patient and enjoy the first images and video on the right…


