Penn CIS 400 - Photography based Texturing


CSE 400/401 Senior Design Project Final Report
April 2005

Photography-based Texturing
Participant: Darren Tay – [email protected]
Advisor: Camillo J. Taylor

Abstract:
Photography-based 3D modeling has the potential to produce more detailed and realistic scenes with less human effort. Professor Taylor currently has a system that computes scene geometry from the disparity between surrounding photographs. I intend to extend the project by using the textures from the reference photographs to provide photorealistic views from arbitrary viewpoints. To date I have a working system, written in C. It was used to generate a reasonably photorealistic fly-by animation of Prof. Taylor's test scene containing several toy blocks.

Related Work:
Debevec et al. describe the view-dependent texture mapping approach: where each point on the target appears in two or more reference photographs, each pixel in the new view can be mixed from the corresponding pixels in the reference views, in inverse proportion to the angular difference between the new view and the contributing reference views. This correspondence of pixels is determined by projecting the reference views onto the wire-frame model and onto the new view. The approach therefore requires three crucial pieces of information: (1) the geometry of the target, (2) the reference photographs, and (3) the camera parameters associated with each reference photograph.

There are at least three commercial packages with similar functionality: Realviz ImageModeler, Metacreations Canoma (both inspired by the Façade project), and Photomodeler. All of these have limited success with detailed organic shapes because the models have to be human-defined with geometric primitives, but they are well suited to architectural models. This project is different in that the models are computed from stereo disparity (Prof. Taylor's existing system), so it may better handle the detailed organic shapes of real environments.

Technical Approach:
Here is a block diagram representing the system in its current form. It is necessarily abstract for clarity, because the actual control flow does not reduce to a neat block diagram.

Fig 1: Main block diagram
(a)-(c) Inputs: reference photos, the camera parameters associated with each reference photograph, and the scene description.
(d1) Interpolate between key camera orientations to obtain the fly-by frames.
(d2) Requested viewpoint (camera parameters and image size).
(e) Find the scene intersection point for each pixel (a camera-transform preprocess, then ray-casting within the preprocessed subset of triangles); now we know the point in the world that each pixel sees.
(f) For each pixel in the new view, transform the seen world-point into the reference images; each corresponding reference pixel may contribute to the final color of the new pixel.
(g) The contribution depends on whether the reference pixel really "sees" the requested part of the scene (occlusion) and on the closeness of the view angles.
(h) Output a bitmap image.
(i) Assemble the bitmap frames into a fly-by animation.

The previous prototype was admittedly inefficient due to the use of ray-casting to associate every pixel with a point in the model space. This is very expensive because, for each pixel, a ray cast through the pixel center has to be checked for intersection with every triangle in the scene. Furthermore, ray-casting is done for the requested new view as well as for every reference view, because we require information about what each reference pixel "sees" in order to deal with occlusion in step (g).
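As a concrete illustration of why step (e) is expensive, here is a minimal C sketch of the brute-force per-pixel ray cast. This is an illustrative sketch, not the project's actual code: it uses the Möller-Trumbore ray/triangle test, which the report does not name, and the Vec3 and Triangle types and helper functions are assumed stand-ins for the real data structures.

    #include <math.h>
    #include <float.h>

    typedef struct { double x, y, z; } Vec3;
    typedef struct { Vec3 v0, v1, v2; } Triangle;

    static Vec3   vsub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
    static double vdot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3   vcross(Vec3 a, Vec3 b) {
        Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
        return r;
    }

    /* Moller-Trumbore test: returns 1 and the distance t along the ray
       (orig + t * dir) if the ray hits the triangle, 0 otherwise. */
    static int ray_hits_triangle(Vec3 orig, Vec3 dir, const Triangle *tri, double *t_out)
    {
        const double EPS = 1e-9;
        Vec3 e1 = vsub(tri->v1, tri->v0);
        Vec3 e2 = vsub(tri->v2, tri->v0);
        Vec3 p  = vcross(dir, e2);
        double det = vdot(e1, p);
        if (fabs(det) < EPS) return 0;            /* ray parallel to triangle plane */
        double inv = 1.0 / det;
        Vec3 s = vsub(orig, tri->v0);
        double u = vdot(s, p) * inv;
        if (u < 0.0 || u > 1.0) return 0;
        Vec3 q = vcross(s, e1);
        double v = vdot(dir, q) * inv;
        if (v < 0.0 || u + v > 1.0) return 0;
        double t = vdot(e2, q) * inv;
        if (t <= EPS) return 0;                   /* intersection behind the ray origin */
        *t_out = t;
        return 1;
    }

    /* Brute-force form of step (e): for the ray through one pixel center,
       test every triangle in the scene and keep the nearest hit. */
    static int nearest_triangle(Vec3 orig, Vec3 dir,
                                const Triangle *tris, int ntris, double *t_near)
    {
        int best = -1;
        *t_near = DBL_MAX;
        for (int i = 0; i < ntris; i++) {
            double t;
            if (ray_hits_triangle(orig, dir, &tris[i], &t) && t < *t_near) {
                *t_near = t;
                best = i;
            }
        }
        return best;   /* index of the triangle this pixel "sees", or -1 for none */
    }

Repeating this for every pixel of the new view and of every reference view makes the cost proportional to (number of pixels) times (number of triangles), which is what motivates the preprocessing step described next.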
Performance was greatly improved by a preprocessing step that reduces the number of triangles each ray cast needs to be checked against. The camera-transform preprocessing performed is as follows:

Code 1
    Each pixel is associated with an initially empty linked list;
    For each triangle in the scene {
        Project it onto the image plane;
        For each pixel in the rectangular area bounding the projected triangle {
            Add the triangle's ID to the pixel's linked list;
        }
    }

The ray-casting step is then modified to check only against the triangles in each pixel's linked list. After ray-casting, the linked lists are explicitly freed, since they can occupy significant amounts of memory.

For each pixel of the new view, the contributions of the corresponding reference pixels that see the same point in the scene are weighted primarily by the angular closeness of the ray from the reference view and the ray from the new view. To be precise, the dot product of the unit vectors representing the two directions is taken; the closer the reference view is to the new view, the closer the dot product is to one. In future, the weighting can be further adjusted using an exponent: intuitively, weighting by the square of the dot product gives an even stronger preference to a reference pixel with a close view angle, while using the square root does the opposite. In practice, I needed to take (dot product) + 1 to keep the weights non-negative.

After the weights are determined by closeness of the view angles, I further down-weighted pixels that saw certain model edges (down-weighting is done by multiplying by a factor such as 0.1). This was required to tackle a streaking artifact seen in earlier versions. See Figures 2 and 3.

Fig 2: Note the streaking artifacts on the "floor".

Fig 3: Diagram showing the cause of the artifacts (labeled points a-h on the near and far objects, as seen from the reference camera and the new camera).

In a situation like Figure 3, reference pixels near point a contribute to the pixels at c and e, while pixels near point b contribute to the pixels at d. The trouble is that the pixels at a and b are likely to have color values somewhere between those of the near and far objects (the red circle and the blue oval). Thus pixels c, d and e do not get the pure blue or red that they are supposed to get. I handled this and related cases by down-weighting pixels near points a and b.

Algorithmically, for each reference pixel P0: if it has a neighboring pixel Pn that sees a triangle TPn different from the triangle TP0 that P0 sees, and TPn and TP0 are not neighbors in the scene (i.e., they do not share at least one vertex), then P0 is in an edge situation similar to point a in Figure 3, so we should reduce P0's weight. The effect of this
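The weighting just described can be summarized in a short C sketch, again illustrative rather than the project's actual code. It assumes that, for each pixel of the new view, the contributing reference samples have already been gathered, each carrying the unit ray direction from the reference camera, the reference pixel's color, and an edge flag computed by the neighbor-triangle test above; the RefSample type, the blend_pixel function, the exponent and edge_factor parameters, and the final division by the total weight (a plain weighted average) are all assumptions.

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;
    typedef struct { double r, g, b; } Color;

    static double vdot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    /* One candidate contribution from a reference view. */
    typedef struct {
        Vec3  ray_dir;       /* unit direction of the reference ray to the world point */
        Color color;         /* color of the corresponding reference pixel */
        int   on_model_edge; /* set by the neighbor-triangle ("edge situation") test */
    } RefSample;

    /* Blend the reference samples that see the same world point.  new_ray is the
       unit direction of the new view's ray to that point; exponent and edge_factor
       (e.g. 1.0 and 0.1) are the tuning knobs mentioned in the text. */
    static Color blend_pixel(Vec3 new_ray, const RefSample *samples, int n,
                             double exponent, double edge_factor)
    {
        Color out = { 0.0, 0.0, 0.0 };
        double total = 0.0;
        for (int i = 0; i < n; i++) {
            /* the dot product of unit vectors approaches 1 for close view angles;
               add 1 so the weight can never go negative */
            double w = vdot(new_ray, samples[i].ray_dir) + 1.0;
            w = pow(w, exponent);              /* exponent > 1 sharpens the preference */
            if (samples[i].on_model_edge)
                w *= edge_factor;              /* down-weight edge pixels, e.g. by 0.1 */
            out.r += w * samples[i].color.r;
            out.g += w * samples[i].color.g;
            out.b += w * samples[i].color.b;
            total += w;
        }
        if (total > 0.0) {                     /* normalize to a weighted average */
            out.r /= total;
            out.g /= total;
            out.b /= total;
        }
        return out;
    }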

