MIT EECS 6.837
Monte-Carlo Ray Tracing

Last Time?
• Two perspectives on antialiasing:
  – Signal processing
  – Integration
• Supersampling, multisampling
• Jittering, Poisson disk
• Adaptive supersampling
• Monte-Carlo integration
• Probabilities: discrete, continuous
  – Expected value, variance

Goal today
• Prove that Monte-Carlo integration works
  – Notion of expected value
• Prove its convergence speed
  – Error: notion of variance
• For all this, we need to study what happens to expected value and variance when we add tons of random samples
• Apply to ray tracing

Today’s lecture
• Expected value and variance
• Analysis of Monte-Carlo integration
• Monte-Carlo in graphics
• Importance sampling
• Stratified sampling
• Global illumination
• Advanced Monte-Carlo rendering

A little bit of eye candy for motivation
• Glossy material rendering
• Random reflection rays around the mirror direction
  – 1 sample per pixel
  – 256 samples per pixel

Back to work
• Let us study how expected value and variance behave

Expected value
• Expected value is linear: E[f1(x) + a f2(x)] = E[f1(x)] + a E[f2(x)]

Variance
• Measure of deviation from the expected value
• Expected value of the squared difference (MSE): σ²[x] = E[(x − E[x])²]
• Standard deviation σ: square root of the variance (notion of error, RMS)
• Proof that σ²[x] = E[x²] − E[x]²: note that E[x] is a constant, so by linearity of E we have
  E[(x − E[x])²] = E[x² − 2x E[x] + E[x]²] = E[x²] − 2 E[x] E[x] + E[x]² = E[x²] − E[x]²

Non-linearity of variance, covariance
• Variance is not linear!
• σ²[ax] = a² σ²[x]
• Consider two random variables x1 and x2, and define the covariance
  Cov[x1, x2] = E[x1 x2] − E[x1] E[x2]
  – Tells how much the two tend to be big at the same time
  – Null if the variables are independent
• σ²[x1 + x2] = σ²[x1] + σ²[x2] + 2 Cov[x1, x2]

Recap
• Expected value is linear
  – E[a x1 + b x2] = a E[x1] + b E[x2]
• Variance is not
• For two independent variables
  – σ²[x1 + x2] = σ²[x1] + σ²[x2]
  – If not independent, we need the covariance term
• σ²[ax] = a² σ²[x]

Questions?
• Image from the ARNOLD Renderer by Marcos Fajardo

Today’s lecture
• Expected value and variance
• Analysis of Monte-Carlo integration
• Monte-Carlo in graphics
• Importance sampling
• Stratified sampling
• Global illumination
• Advanced Monte-Carlo rendering

Monte Carlo integration
• Function f(x) of x ∈ [a, b]
• We want to compute I = ∫ f(x) dx over [a, b]
• Consider a random variable x
• If x has the uniform density 1/(b − a), then E[f(x)] = 1/(b − a) ∫ f(x) dx, so I = (b − a) E[f(x)]
  – By the definition of the expected value

Sum of Random Variables
• Use N independent identically-distributed (IID) variables xi
  – They share the same probability distribution (uniform here)
• Define F_N = (1/N) Σ f(xi)
• By linearity of the expectation: E[F_N] = E[f(x)]

Study of variance
• Recall σ²[x + y] = σ²[x] + σ²[y] + 2 Cov[x, y]
  – We have independent variables, so Cov[xi, xj] = 0 for i ≠ j
• σ²[ax] = a² σ²[x]
• Therefore σ²[F_N] = σ²[(1/N) Σ f(xi)] = (1/N²) Σ σ²[f(xi)] = (1/N) σ²[f(x)]
• i.e. the variance decreases as 1/N, and the standard deviation σ (the error) decreases as 1/√N

Example
• Integrate a test function whose value we know should be 1.0
• In practice, with uniform samples: [table of N vs. measured variance σ² and error]
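This convergence behavior is easy to check numerically. Below is a minimal Python sketch (the integrand 3x², whose integral over [0, 1] is exactly 1.0, and the sample counts are illustrative choices, not from the slides) that measures the RMS error of the estimator F_N as N grows:

```python
import random

def mc_estimate(f, n):
    """Basic Monte-Carlo estimator F_N = (1/N) * sum f(x_i) of the
    integral of f over [0, 1], using n uniform samples."""
    return sum(f(random.random()) for _ in range(n)) / n

# Illustrative integrand: the integral of 3x^2 over [0, 1] is exactly 1.0.
f = lambda x: 3.0 * x * x

trials = 1000
for n in (16, 256, 4096):
    # Average the squared error over many runs to estimate the variance.
    mse = sum((mc_estimate(f, n) - 1.0) ** 2 for _ in range(trials)) / trials
    print(f"N = {n:5d}   RMS error = {mse ** 0.5:.4f}")
# The RMS error shrinks roughly 4x per 16x increase in N, i.e. as 1/sqrt(N).
```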
Monte Carlo integration with probability
• Consider N random samples drawn over the domain with probability density p(x)
• Define the estimator ⟨I⟩ as: ⟨I⟩ = (1/N) Σ f(xi) / p(xi)
• The probability p allows us to sample the domain more smartly

Monte-Carlo Recap
• The expected value of the estimator is the integral
  – Accurate “on average”
• Variance decreases as 1/N
  – Error decreases as 1/√N
• Good news:
  – The math is over for today
  – OK, it’s bad news if you like math (and you should)

Questions?

Advantages of MC Integration
• Few restrictions on the integrand
  – Doesn’t need to be continuous, smooth, ...
  – We only need to be able to evaluate it at a point
• Extends to high-dimensional problems
  – Same convergence rate
• Conceptually straightforward
• Efficient for solving at just a few points

Disadvantages of MC
• Noisy
• Slow convergence
• A good implementation is hard
  – Debugging code
  – Debugging math
  – Choosing appropriate techniques

Questions?
• [Images by Veach and Guibas: naïve sampling strategy vs. optimal sampling strategy]

Today’s lecture
• Expected value and variance
• Analysis of Monte-Carlo integration
• Monte-Carlo in graphics
• Importance sampling
• Stratified sampling
• Global illumination
• Advanced Monte-Carlo rendering

What can we integrate?
• Pixel: antialiasing
• Light sources: soft shadows
• Lens: depth of field
• Time: motion blur
• BRDF: glossy reflection
• Hemisphere: indirect lighting

Domains of integration
• Pixel, lens (Euclidean 2D domain)
• Time (1D)
• Hemisphere
  – Work needed to ensure uniform probability
• Light source
  – Same thing: make sure that the probabilities and the measures are right

Example: Light source
• Integrate over the surface or over the angle
• Be careful to get the probabilities and the integration measure right!
  – More in 6.839
• [Figure: sampling the source uniformly vs. sampling the hemisphere uniformly]

Questions?
• Image by Henrik

Today’s lecture
• Expected value and variance
• Analysis of Monte-Carlo integration
• Monte-Carlo in graphics
• Importance sampling
• Stratified sampling
• Global illumination
• Advanced Monte-Carlo rendering

Important issues in MC rendering
• Reduce variance!
  – Choose a smart probability distribution
  – Choose smart sampling patterns
• And of course, cheat to make it faster without being noticed

Example: Glossy rendering
• Integrate over the hemisphere
• Integrand: BRDF times cosine times incoming light I(ωi)
• Slide courtesy of Jason Lawrence

Sampling a BRDF
• Comparison at 5, 25, and 75 samples per pixel: uniform sampling U(ωi) vs. BRDF-proportional sampling P(ωi), for a fixed outgoing direction ωo
• Slides courtesy of Jason Lawrence

Importance sampling
• Choose p wisely to reduce variance
  – A p that resembles f
  – Does not change the convergence rate (still 1/√N)
  – But decreases the constant
• [Figure: sampling densities labeled uniform, bad, and good]
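As a minimal sketch of the estimator ⟨I⟩ = (1/N) Σ f(xi)/p(xi) (the density p(x) = 2x and the integrand are illustrative choices, not from the slides), the following compares uniform sampling against sampling from a p that resembles f:

```python
import random

def importance_estimate(f, sample_p, pdf_p, n):
    """Estimator <I> = (1/N) * sum f(x_i) / p(x_i) with samples x_i drawn
    from density p; unbiased as long as p(x) > 0 wherever f(x) != 0."""
    return sum(f(x) / pdf_p(x)
               for x in (sample_p() for _ in range(n))) / n

# Same illustrative integrand as above: f(x) = 3x^2, exact integral 1.0.
f = lambda x: 3.0 * x * x

# Density p(x) = 2x resembles f more closely than the uniform density 1.
# Its CDF is x^2, so we sample by inversion: x = sqrt(u), u uniform.
# (1 - random() lies in (0, 1], which keeps p(x) strictly positive.)
sample_p = lambda: (1.0 - random.random()) ** 0.5
pdf_p = lambda x: 2.0 * x

n = 256
uniform = sum(f(random.random()) for _ in range(n)) / n
weighted = importance_estimate(f, sample_p, pdf_p, n)
print(f"uniform: {uniform:.4f}   importance: {weighted:.4f}   exact: 1.0")
# Both converge as 1/sqrt(N); p ~ f only shrinks the constant (the variance).
```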
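To connect the estimator to the hemisphere integrals used in rendering, here is a sketch under the simplifying assumption of constant incoming radiance L = 1 (the function names are illustrative, not from the slides): uniform directions must be weighted by cosθ divided by their pdf 1/(2π), while cosine-weighted directions use a p proportional to the cosθ factor of the integrand, so for constant L every sample returns the exact answer π.

```python
import math, random

def sample_uniform_hemisphere():
    """Uniform direction on the hemisphere; pdf = 1 / (2*pi)."""
    u1, u2 = random.random(), random.random()
    cos_theta = u1                      # cos(theta) is uniform in [0, 1)
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    phi = 2.0 * math.pi * u2
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

def sample_cosine_hemisphere():
    """Cosine-weighted direction; pdf = cos(theta) / pi (Malley's method:
    sample the unit disk uniformly, then project up to the hemisphere)."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi),
            math.sqrt(max(0.0, 1.0 - u1)))

L = lambda w: 1.0          # illustrative: constant incoming radiance
n = 10_000
exact = math.pi            # integral of cos(theta) over the hemisphere

# Uniform sampling: weight each sample by cos(theta) / pdf = cos(theta)*2*pi.
est_u = sum(L(w) * w[2] * 2.0 * math.pi
            for w in (sample_uniform_hemisphere() for _ in range(n))) / n

# Cosine-weighted sampling: cos(theta)/pdf is the constant pi, so for
# constant L the estimator has zero variance.
est_c = sum(L(w) * math.pi
            for w in (sample_cosine_hemisphere() for _ in range(n))) / n

print(f"uniform: {est_u:.4f}   cosine-weighted: {est_c:.4f}   exact: {exact:.4f}")
```

This is the same variance-reduction effect as the P(ωi) column in the "Sampling a BRDF" comparison: the closer p matches the integrand, the lower the noise at a given sample count.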