Image-based rendering (IBR) is a powerful new approach to computer graphics that allows three-dimensional objects and scenes to be visualized realistically without full 3D geometric model reconstruction. Conventional computer graphics systems use geometry-based rendering (GBR) techniques to render three-dimensional objects or scenes. These techniques produce images from 3D geometric models of the scene, which can include a wide range of data such as the geometry of scene objects, the positions of light sources, the optical properties of surfaces, the viewer position, and so forth. The main bottleneck of the GBR pipeline is that model generation is a time-consuming process and is highly dependent on scene complexity. Furthermore, such systems have a limited ability to construct a photo-realistic virtual environment. Researchers in the field of computer graphics have therefore recently turned to IBR techniques, for several reasons: these techniques are computationally less costly, come close to photorealism, and their rendering time is typically constant and does not depend on the scene complexity. IBR techniques use pre-acquired reference images, along with other parameters such as depth maps and positional correspondences, to synthesize arbitrary views of an object or scene. These techniques have many potential applications in domains such as virtual reality, video games, sports broadcasting, 3D television, the film industry, and mobile/handheld devices. The objective of this talk is to discuss the concept of image-based rendering, the fundamental principles behind various IBR techniques, and the strengths and limitations of each technique.

Real-time rendering of complex 3D scenes on mobile devices is a challenging task. The main reason is that mobile devices have limited computational capabilities and lack powerful 3D graphics hardware. In this paper, we propose an image-based rendering (IBR) system for mobile devices to view real-world or synthetic scenes in a network environment. Our system uses a server to compute the required image segments of pre-captured panoramic video and transmit them to the client. After receiving the data, the mobile client carries out rendering using simple image warping. The rendering process needs little computational power and is insensitive to scene complexity. A rate-control scheme is designed for efficient use of network bandwidth and for handling network congestion. Pre-fetching and cache management are also employed on the client and server sides for efficient memory use and for reducing the number of transmission requests. With this client-server architecture and local rendering scheme, interactive exploration of 3D scenes on mobile devices becomes possible. Experimental results show that our system achieves an acceptable rendering speed on common mobile devices.
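To make the client-side rendering step concrete, the listing below is a minimal sketch, in C++, of the kind of image warping a mobile client could perform: resampling a segment of a cylindrical panorama into a planar perspective view for the current viewing direction. The Image structure, the function name, and the unit-height-cylinder assumption are illustrative choices for this example, not the actual implementation described in this paper.

    // Minimal sketch: resample a cylindrical panorama segment into a planar
    // perspective view. Nearest-neighbour sampling is used for brevity; a real
    // client would likely use bilinear interpolation.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Image {
        int width = 0, height = 0;
        std::vector<unsigned char> pixels;   // grayscale, row-major
        unsigned char at(int x, int y) const {
            return pixels[static_cast<size_t>(y) * width + x];
        }
    };

    // Render a w x h perspective view looking along 'heading' (radians) with the
    // given horizontal field of view, by resampling the cylindrical panorama.
    Image warpCylindricalToPlanar(const Image& pano, double heading,
                                  double hfov, int w, int h) {
        const double kPi = 3.14159265358979323846;
        Image out;
        out.width = w; out.height = h;
        out.pixels.resize(static_cast<size_t>(w) * h, 0);

        const double focal = (w / 2.0) / std::tan(hfov / 2.0);   // focal length in pixels
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                const double dx = x - w / 2.0;
                const double dy = y - h / 2.0;
                // Azimuth of the ray through this pixel, and its height on a unit cylinder.
                const double azimuth = heading + std::atan2(dx, focal);
                const double cylY    = dy / std::sqrt(dx * dx + focal * focal);
                // Map to panorama coordinates; wrap horizontally, clamp vertically.
                double u = std::fmod(azimuth / (2.0 * kPi), 1.0);
                if (u < 0.0) u += 1.0;
                const double v = 0.5 + cylY;   // assumes the panorama spans one cylinder radius vertically
                const int px = static_cast<int>(u * (pano.width - 1));
                const int py = std::clamp(static_cast<int>(v * (pano.height - 1)),
                                          0, pano.height - 1);
                out.pixels[static_cast<size_t>(y) * w + x] = pano.at(px, py);
            }
        }
        return out;
    }

Because each output pixel requires only a few arithmetic operations and one lookup into the received panorama segment, the per-frame cost depends on the output resolution rather than on the scene, which is why this style of warping is insensitive to scene complexity.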
Authoring a 3D scene with an IBR representation requires many images, and it is impossible for mobile devices to load the entire data set into memory. Fortunately, only a small part of the whole data set is needed to render any one viewpoint, so only the required parts need to be transmitted. Pre-fetching data that is likely to be used in the near future is important for a practical system. Caches with carefully designed replacement schemes are also important for efficient data management. With pre-fetching and cache management, performance can be improved because the required data is usually already available in the caches. As in other network applications, a rate-control scheme is essential for reducing the influence of network latency and for making the interaction between the two sides more efficient.

Image-based modeling and rendering techniques have recently received much attention as a powerful alternative to traditional geometry-based techniques for image synthesis. Instead of geometric primitives, a collection of sample images is used to render novel views. Previous work on image-based rendering (IBR) reveals a continuum of image-based representations [22, 15] based on the trade-off between how many input images are needed and how much is known about the scene geometry. For didactic purposes, we classify the various rendering techniques (and their associated representations) into three categories, namely rendering with no geometry, rendering with implicit geometry, and rendering with explicit geometry. These categories, depicted in Figure 1, should be viewed as a continuum rather than as absolute discrete classes, since some techniques defy strict categorization.

Using an on-chip MPEG-4 video encoder, the rendering engine generates scene frames according to the client's input. However, the reported frame rate is only about 2 to 3 fps because of the expensive compression scheme. Chim et al. [2] implemented a distributed walkthrough environment based on an on-demand transmission strategy, in which clients render virtual scenes by fetching geometry data from the server. A multi-resolution caching mechanism was employed to reduce the influence of network latency. Although this scheme is very useful, it belongs to traditional geometry rendering and is not suitable for mobile devices. Engel et al. [4] presented a framework providing remote rendering of 3D applications based on Open Inventor or Cosmo3D. It transmitted compressed images from the server to a Java-based client, and the client sent its events through CORBA requests; for mobile devices, CORBA is expensive. Ma et al. [8] developed an end-to-end, low-cost solution for visualizing time-varying volume data rendered on a parallel computer.
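As an illustration of the pre-fetching and cache-management idea, the sketch below shows a client-side segment cache with least-recently-used (LRU) replacement and a simple neighbour pre-fetch policy. The names SegmentId, Segment, and fetchFromServer, the LRU policy, and the neighbour-based pre-fetch radius are assumptions made for this example only; the actual replacement and pre-fetching schemes used by the system may differ.

    // Minimal sketch of a client-side segment cache with LRU replacement and
    // neighbour pre-fetching. SegmentId, Segment and fetchFromServer are
    // placeholders standing in for the real data types and network layer.
    #include <cstddef>
    #include <list>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    using SegmentId = int;                           // index of a panoramic image segment
    struct Segment { std::vector<unsigned char> bytes; };

    // Placeholder for the actual (compressed) transmission from the server.
    Segment fetchFromServer(SegmentId id) { (void)id; return Segment{}; }

    class SegmentCache {
    public:
        explicit SegmentCache(std::size_t capacity) : capacity_(capacity) {}

        // Return a segment, fetching it over the network only on a cache miss.
        const Segment& get(SegmentId id) {
            auto it = index_.find(id);
            if (it != index_.end()) {                // hit: mark as most recently used
                lru_.splice(lru_.begin(), lru_, it->second);
                return it->second->second;
            }
            if (lru_.size() >= capacity_) {          // evict the least recently used segment
                index_.erase(lru_.back().first);
                lru_.pop_back();
            }
            lru_.emplace_front(id, fetchFromServer(id));
            index_[id] = lru_.begin();
            return lru_.begin()->second;
        }

        // Pre-fetch the segments adjacent to the current viewing direction so a
        // small pan can be served from the cache without a blocking request.
        void prefetchNeighbours(SegmentId current, int radius, int totalSegments) {
            for (int d = 1; d <= radius; ++d) {
                get((current + d) % totalSegments);
                get((current - d + totalSegments) % totalSegments);
            }
        }

    private:
        std::size_t capacity_;
        std::list<std::pair<SegmentId, Segment>> lru_;   // front = most recently used
        std::unordered_map<SegmentId,
            std::list<std::pair<SegmentId, Segment>>::iterator> index_;
    };

With such a cache, a small change of viewpoint is usually served from local memory, so a transmission request is issued only when the viewer moves into a region whose segments have not yet been fetched.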