Single-Image Shadow Detection and Removal using Paired Regions

Ruiqi Guo    Qieyun Dai    Derek Hoiem
University of Illinois at Urbana-Champaign
{guo29,dai9,dhoiem}@illinois.edu

Abstract

In this paper, we address the problem of shadow detection and removal from single images of natural scenes. Different from traditional methods that explore pixel or edge information, we employ a region-based approach. In addition to considering individual regions separately, we predict relative illumination conditions between segmented regions from their appearances and perform pairwise classification based on such information. Classification results are used to build a graph of segments, and graph-cut is used to solve the labeling of shadow and non-shadow regions. Detection results are later refined by image matting, and the shadow-free image is recovered by relighting each pixel based on our lighting model. We evaluate our method on the shadow detection dataset in [19]. In addition, we created a new dataset with shadow-free ground-truth images, which provides a quantitative basis for evaluating shadow removal.

1. Introduction

Shadows, created wherever an object obscures the light source, are an ever-present aspect of our visual experience. Shadows can either aid or confound scene interpretation, depending on whether we model the shadows or ignore them. If we can detect shadows, we can better localize objects, infer object shape, and determine where objects contact the ground. Detected shadows also provide cues for lighting direction [10] and scene geometry. On the other hand, if we ignore shadows, spurious edges on the boundaries of shadows and confusion between albedo and shading can lead to mistakes in visual processing. For these reasons, shadow detection has long been considered a crucial component of scene interpretation (e.g., [17, 2]).
But despite its importance and long tradition, shadow detection remains an extremely challenging problem, particularly from a single image. The main difficulty is due to the complex interactions of geometry, albedo, and illumination. Locally, we cannot tell whether a surface is dark due to shading or albedo, as illustrated in Figure 1. To determine whether a region is in shadow, we must compare the region to others that have the same material and orientation. For this reason, most research focuses on modeling the differences in color, intensity, and texture of neighboring pixels or regions.

Figure 1. What is in shadow? Local region appearance can be ambiguous; to find shadows, we must compare surfaces of the same material.

Many approaches are motivated by physical models of illumination and color [12, 15, 16, 7, 5]. For example, Finlayson et al. [7] compare edges in the original RGB image to edges found in an illuminant-invariant image. This method can work quite well with high-quality images and calibrated sensors, but often performs poorly on typical web-quality consumer photographs [11]. To improve robustness, others have recently taken a more empirical, data-driven approach, learning to detect shadows based on training images. In monochromatic images, Zhu et al. [19] classify regions based on statistics of intensity, gradient, and texture, computed over local neighborhoods, and refine shadow labels using a conditional random field (CRF). Lalonde et al. [11] find shadow boundaries by comparing the color and texture of neighboring regions and employing a CRF to encourage boundary continuity.

Our goal is to detect shadows and remove them from the image. To determine whether a particular region is shadowed, we compare it to other regions in the image that are likely to be of the same material. To start, we find pairs of regions that are likely to correspond to the same material and determine whether they have the same illumination conditions.
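A comparison of this kind can be illustrated with a small sketch. The feature names and their exact composition below are our own choices for illustration, not the paper's feature set; the idea is simply that two regions of the same material differ under a shadow mainly by per-channel intensity ratios, while their texture distributions stay similar.

```python
import numpy as np

def pair_features(region_a, region_b):
    """Illustrative features for deciding whether two regions could be the
    same material under different illumination (names are ours, hypothetical).

    region_a/b : dict with 'rgb' (N, 3) pixel samples and
                 'texton_hist' (K,) normalized texture histogram
    """
    mean_a = region_a['rgb'].mean(axis=0)
    mean_b = region_b['rgb'].mean(axis=0)
    # Per-channel intensity ratios: the same material seen in and out of
    # shadow differs mostly by a multiplicative illumination change.
    color_ratio = mean_a / np.maximum(mean_b, 1e-6)
    # Histogram intersection: same material => similar texture statistics.
    tex_sim = np.minimum(region_a['texton_hist'], region_b['texton_hist']).sum()
    return np.concatenate([color_ratio, [tex_sim]])
```

In a full pipeline, such features would feed a trained pairwise classifier rather than a hand-set threshold.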
We incorporate these pairwise relationships, together with region-based appearance features, in a shadow/non-shadow graph. The node potentials in our graph encode region appearance; a sparse set of edge potentials indicate whether two regions from the same surface are likely to be of the same or different illumination. Finally, the regions are jointly classified as shadow/non-shadow using graph-cut inference. Like Zhu et al. [19] and Lalonde et al. [11], we take a data-driven approach, learning our classifiers from training data, which leads to good performance on consumer-quality photographs. Unlike others, we explicitly model the material and illumination relationships of pairs of regions, including non-adjacent pairs. By modeling long-range interactions, we hope to better detect soft shadows, which can be difficult to detect locally. By restricting comparisons to regions with the same material, we aim to improve robustness in complex scenes, where material and shadow boundaries may coincide.

Figure 2. Illustration of our framework. First column: the original image with shadow, ground-truth shadow mask, and ground-truth image. Second column: hard shadow map generated by our detection method and the image recovered using this map alone; note the strong boundary effects in the recovered image. Third column: soft shadow map computed using soft matting and the recovery result using this map.

Our shadow detection provides binary pixel labels, but shadows are not truly binary. Illumination often changes gradually across shadow boundaries. We also want to estimate a soft mask of shadow coefficients, which indicates the darkness of the shadow, and to recover a shadow-free image that depicts the scene under uniform illumination. The most popular approach to shadow removal is proposed in a series of papers by Finlayson and colleagues, who treat shadow removal as a reintegration problem based on detected shadow edges [6, 9, 8].
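The joint labeling described above minimizes an energy with unary (region appearance) and pairwise (illumination-relation) terms. A minimal sketch of such an energy follows, with brute-force enumeration standing in for graph-cut inference; the function names, cost conventions, and toy values are our own assumptions, not the paper's implementation.

```python
from itertools import product

def shadow_energy(labels, unary, edges):
    """Energy of a binary labeling (1 = shadow, 0 = lit).

    unary[i][l] : cost of giving region i label l
                  (from a single-region appearance classifier)
    edges       : {(i, j): (same_illum_conf, diff_illum_conf)} for
                  same-material region pairs
    """
    e = sum(unary[i][l] for i, l in enumerate(labels))
    for (i, j), (same_conf, diff_conf) in edges.items():
        if labels[i] == labels[j]:
            e += diff_conf   # agreeing labels contradict a "different illumination" belief
        else:
            e += same_conf   # differing labels contradict a "same illumination" belief
    return e

def min_energy_labeling(unary, edges):
    """Exhaustive search over labelings; for submodular binary energies,
    graph-cut finds the same global minimum efficiently."""
    n = len(unary)
    return min(product((0, 1), repeat=n), key=lambda y: shadow_energy(y, unary, edges))
```

Note how a confident same-illumination edge lets an ambiguous region inherit the label of a region it shares material with, which is exactly the long-range effect the pairwise terms are meant to provide.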
Our region-based shadow detection enables us to pose shadow removal as a matting problem, similarly to Wu et al. [18]. However, the method of Wu et al. [18] depends on user input of shadow and non-shadow regions, while we automatically detect and remove shadows in a unified framework (Figure 2). Specifically, after detecting shadows, we apply the matting technique of Levin et al. [13], treating shadow pixels as foreground and non-shadow pixels as background. Using the recovered shadow coefficients, we calculate the ratio between direct light and environment light and generate the recovered image by relighting each pixel with both direct light and environment light.

To evaluate our shadow detection and removal, we propose a new dataset with 108 natural scenes, in which
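The per-pixel relighting step can be sketched as follows. We assume a simple two-source model in which an observed pixel is I = (k * L_d + L_e) * R, with k the recovered shadow coefficient, L_d direct light, L_e environment light, and R the reflectance; the function name and array conventions below are ours, a sketch rather than the paper's implementation.

```python
import numpy as np

def relight(image, shadow_coeff, ratio):
    """Relight each pixel given a soft shadow coefficient map.

    image        : float array (H, W, 3), the shadowed input
    shadow_coeff : float array (H, W), k in [0, 1]; k = 1 means fully lit,
                   k = 0 means direct light is fully blocked
    ratio        : per-channel ratio r = L_direct / L_environment, shape (3,)

    Under I = (k * L_d + L_e) * R, the shadow-free pixel (L_d + L_e) * R
    is obtained by multiplying by (r + 1) / (k * r + 1).
    """
    k = shadow_coeff[..., None]                 # broadcast over channels
    return image * (ratio + 1.0) / (k * ratio + 1.0)
```

A fully lit pixel (k = 1) is left unchanged, while a fully shadowed pixel (k = 0) is brightened by the factor r + 1; soft-matted coefficients in between give the gradual transitions across shadow boundaries.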

