High Dynamic Range Imaging: Spatially Varying Pixel Exposures
Shree K. Nayar, Tomoo Mitsunaga
CPSC 643 Presentation #2
Brien Flewelling
March 4th, 2009

Overview
- HDR imaging: the problem, motivation, methods
- Related work: where it started; sequential images; multiple detectors; adaptive pixel elements
- HDR imaging using spatially varying pixel exposures: the method, image acquisition, image reconstruction
- Experimental results
- Conclusions and future work

High Dynamic Range Imaging: The Idea
- Perceptible intensity values span a range far greater than can be sampled by a single image.
- Using various techniques, estimate the camera response function in order to accurately allocate grayscale bits to energy levels in the scene.

Combining Information from Over-Exposure and Under-Exposure
- Consider the projection of the illumination in a scene as a function of energy rates.
- Brighter/darker regions have a higher probability of being over-/under-exposed in an arbitrary snapshot.
- It is the combination of various sampling techniques that allows us to display these regions together.

Motivation: Why Do We Care?
- High dynamic range images yield scene representations much closer to what the human eye sees.
- Artistic purposes.
- Visual methods need good "landmarks"; if these exist in over- or under-exposed regions,
this can be problematic.
- In tracking, a region could be over-exposed or under-exposed from frame to frame.

Methods: How to Extract HDR Info
Sequential exposures:
- Capture multiple images at various shutter speeds or iris settings.
- Solve a subset of pixel correspondences as an array of linear systems.
- Solve for the camera response function.
- Map the results to the image.

Methods: How to Extract HDR Info
Multiple image detectors:
- Use optical elements to generate multiple images sampled by different imagers.
- The images may have varying sensitivities, resolutions, or exposure times.
- More expensive, but handles moving objects better.

Multiple Sensor Elements in Each Pixel
- Reduces resolution by a factor of 2.
- Simple combination of neighboring elements with different potential-well depths.
- A largely disregarded approach: the sensor cost is greater and the performance gain is not very high.

Adaptive Pixel Exposure
- Vary each pixel's sensitivity as a function of the time its potential well takes to fill.
- Feedback system.
- An interesting and promising approach, but:
  - Expensive for large-scale chip designs.
  - Very sensitive to motion and blur effects in low-light scenes.

Related Work: Where It Started
- [Blackwell, 1946] H. R. Blackwell. Contrast thresholds of the human eye. Journal of the Optical Society of America, 36:624–643, 1946.
- Blackwell studied the variations in illumination that the human eye can perceive in a scene.
- Many patents on HDR CCD sensors in the 1980s.
- Sequential methods for HDR image generation appeared in the early 1990s.

Related Work: Sequential Exposures
- [Azuma and Morimura, 1996], [Saito, 1995], [Konishi et al., 1995], [Morimura, 1993], [Ikeda, 1998], [Takahashi et al., 1997], [Burt and Kolczynski, 1993], [Madden, 1993], [Tsai, 1994],
[Mann and Picard, 1995], [Debevec and Malik, 1997], and [Mitsunaga and Nayar, 1999].
- The final paper extends the estimation to include the radiometric response function of the camera.

Related Work: Hardware Solutions
- Multiple imagers: [Doi et al., 1986], [Saito, 1995], [Saito, 1996], [Kimura, 1998], [Ikeda, 1998].
- Adaptive pixel elements: [Street, 1998], [Handy, 1986], [Wen, 1989], [Hamazaki, 1996], [Murakoshi, 1994], [Konishi et al., 1995], [Brajovic and Kanade, 1996].

Spatially Varying Pixel Exposure
- The SVE (Spatially Varying Exposure) image.
- Let a 2x2 array of pixels be subject to exposures e0, e1, e2, e3.
- Let this array be repeated as a mask over the entire image.

How Does This Increase the DR? How Many Grays? (846)
- K = number of exposure levels: 4
- q = number of quantization levels per pixel: 256
- R = round-off function
- ek = exposure level

Spatial Resolution Reduction
- Not a reduction by a factor of 2!
- Low-exposure pixels can be noise-dominated in dim regions.
- High-exposure pixels can be saturated in bright regions.
- In general, the spatial resolution is not significantly reduced.

Image Reconstruction by Aggregation
- Simple averaging: convolution with a 2x2 box filter.
- Results in a piecewise-linear response, similar to a gamma function with gamma > 1.
- Produces good HDR results overall, except at sharp edges.

Image Reconstruction by Interpolation
- If sharp features are important, the image brightness values M(i,j) are scaled by their exposures to produce M'(i,j).
- Remove all under-exposed and saturated pixels.
- Determine the "off-grid" points from the remaining "on-grid" points by interpolation.
- A cubic interpolation kernel is used in the least-squares estimation of the off-grid points.

Solving for Off-Grid Values with the Interpolation Kernel
- M: 16x1 on-grid brightness values
- F: 16x49 cubic interpolation elements
- Mo: 16x1 off-grid brightness values

Experimental Results - Simulation

Results

Future Work
- The prototype was still being developed.
- Simulation proved useful in the estimation of the
nonlinear response function; can it be used to estimate properties of scene objects?
- Can this be used to estimate or handle motion blur for moving objects?
- What is an optimal pattern for the variation of pixel exposures?
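As a concrete illustration of the SVE idea and the aggregation reconstruction discussed above, here is a minimal NumPy sketch. The exposure values (1, 4, 16, 64) are an assumption for illustration, and the 2x2 averaging is written as a non-overlapping block average for brevity; the paper's aggregation is a sliding 2x2 box filter, which largely preserves resolution, and a real implementation would also discard or down-weight saturated and noise-dominated pixels.

```python
import numpy as np

def sve_capture(radiance, exposures=(1, 4, 16, 64), q=256):
    """Simulate a spatially varying exposure (SVE) capture: tile a
    2x2 exposure mask over the image, then quantize and saturate."""
    h, w = radiance.shape
    e0, e1, e2, e3 = exposures
    mask = np.tile(np.array([[e0, e1], [e2, e3]], dtype=float),
                   (h // 2, w // 2))
    measured = np.clip(np.round(radiance * mask), 0, q - 1)
    return measured, mask

def reconstruct_by_aggregation(measured, mask):
    """Scale each pixel by its exposure, then average each 2x2 block
    (a simplified stand-in for the paper's 2x2 box-filter convolution)."""
    scaled = measured / mask
    return 0.25 * (scaled[0::2, 0::2] + scaled[0::2, 1::2] +
                   scaled[1::2, 0::2] + scaled[1::2, 1::2])

radiance = np.full((4, 4), 2.0)          # toy constant-radiance scene
measured, mask = sve_capture(radiance)
hdr = reconstruct_by_aggregation(measured, mask)
```

On this toy scene no pixel saturates, so scaling each measurement back by its exposure and averaging recovers the original radiance exactly.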
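The "How Many Grays?" counting argument can also be sketched numerically. The point is that K exposures of q levels each yield far fewer than K*q distinct scene-referred levels, because the quantization sets overlap. The exposure ratios below (powers of 4) are an assumption for illustration and do not reproduce the paper's figure of 846, which depends on its specific exposure values and round-off function.

```python
def count_gray_levels(exposures, q=256):
    """Count distinct scene-referred quantization levels across the
    exposures. A measured level m under exposure e corresponds to the
    scene value m / e; working in integer units of 1/max(e) keeps the
    set membership exact (assumes each exposure divides the maximum)."""
    e_max = max(exposures)
    levels = {m * (e_max // e) for e in exposures for m in range(q)}
    return len(levels)

n = count_gray_levels([1, 4, 16, 64])   # assumed ratios, not the paper's
```

With a single exposure the count is just q; with the four assumed exposures the union of the four 256-level sets comes to 832 distinct levels rather than 1024, since the low end of each coarser set coincides with levels already covered by the finer ones.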