Global Correlation Based Ground Plane Estimation Using V-Disparity Image∗

Jun Zhao, Jayantha Katupitiya and James Ward
ARC Centre of Excellence for Autonomous Systems
School of Mechanical and Manufacturing Engineering
The University of New South Wales, Sydney, NSW 2052
[email protected]

∗ This work is supported in part by the ARC Centre of Excellence programme, funded by the Australian Research Council (ARC) and the New South Wales State Government.

Abstract— This paper presents the estimation of the position of the ground plane for navigation of on-road or off-road vehicles, in particular for obstacle detection using stereo vision. Ground plane estimation plays an important role in stereo vision based obstacle detection tasks. The V-disparity image is widely used for ground plane estimation. However, it relies heavily on distinct road features, which may not exist. Here, we introduce a global correlation method to extract the position of the ground plane in the V-disparity image even without distinct road features.

Index Terms— V-disparity image, correlation, stereo vision.

I. INTRODUCTION

Stereo vision is one of the key components in vision-based robot navigation. Today, mobile robotics researchers focus on developing navigation in unknown environments, where a robot is required to respond to changes in the environment in real time. Although laser sensors provide refined and easy-to-use information about the surrounding area, they also present some intrinsic limitations to their functioning, as mentioned in [1].

Because a vision system provides a large amount of data, extracting refined information can sometimes be complex. In obstacle detection tasks, the purpose is to distinguish the obstacle pixels from the ground pixels in the depth map. Se and Brady [2] quote Gibson's "ground theory hypothesis" (1950): "there is literally no such thing as a perception of space without the perception of a continuous background surface". In this study, we assume that the ground can be locally represented by a plane [3].

In built environments such as indoor environments, the position of the stereo rig relative to the ground is normally fixed, so the disparity of ground pixels in the disparity map can be determined during the calibration stage [4]. However, in outdoor environments the pitch angle between the cameras and the road surface changes due to static and dynamic factors [5], so the disparity of ground pixels changes from time to time. Therefore, we need to compute the pitch angle and the disparity of ground pixels dynamically. In [5], the authors used four sensors mounted between the chassis and wheels to compute the pitch angle.

"Plane fitting" is a traditional method for ground estimation and has been used by different researchers. In [6], the authors used RANSAC plane fitting to find the disparity of ground pixels. In [7], pixels (u, v) with a valid value in the depth map are labeled as belonging to the ground plane if the following constraint is satisfied: d(u, v) ≤ au + bv + c + r(d). In [8], the authors developed a road detection algorithm utilizing road features called plane fitting errors.

Recently, the V-disparity image has become popular for ground plane estimation [1][9][10][11]. In this image, the abscissa axis (w) plots the offset for which the correlation has been computed; the ordinate axis (v) plots the image row number; the intensity value is set proportional to the measured correlation, or to the number of pixels having the corresponding disparity (w) in a certain row (v).
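As a concrete illustration of this construction, the sketch below accumulates a V-disparity image from a dense disparity map by histogramming, per image row, the disparities occurring in that row. This is a minimal sketch, not the authors' implementation; the function name, the integer disparity bins, and the convention that non-positive values mark invalid pixels are assumptions made here for illustration.

```python
import numpy as np

def v_disparity(disparity_map, max_disparity):
    """Accumulate a V-disparity image: entry (v, w) counts how many pixels
    in image row v have integer disparity w."""
    rows = disparity_map.shape[0]
    v_disp = np.zeros((rows, max_disparity), dtype=np.int32)
    for v in range(rows):
        d = disparity_map[v]
        d = d[(d > 0) & (d < max_disparity)].astype(int)      # keep valid disparities, truncate to integer bins
        v_disp[v] = np.bincount(d, minlength=max_disparity)   # row-wise disparity histogram
    return v_disp
```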
Each planar surface in the field of view is mapped into a segment in the V-disparity image [10]. Vertical surfaces in the 3D world are mapped into vertical line segments, while the ground plane in the 3D world is mapped into a slanted line segment. This line segment, called the ground correlation line in this study, contains the information about the cameras' pitch angle at the time of acquisition (mixed with the terrain slope information).

Both plane fitting and the V-disparity image rely on distinct road features such as lane markings. Without these features, there would not be sufficient ground pixels in the sparse disparity map from which the ground plane can be extracted. In this paper, we first analyze the behavior of the ground correlation line in the V-disparity image for different camera pitch angles. This is introduced in Section II. Based on this behavior, in Section III we introduce a global correlation method to extract the ground correlation line. In Section IV, we show some experimental results using image pairs without distinct road features. In Section V, we draw some conclusions.

II. GROUND CORRELATION LINE IN V-DISPARITY IMAGE WHEN CHANGING TILT ANGLE

A. Camera placement and geometry

Fig. 1. Camera placement
Fig. 2. Conditions for the ground correlation lines to be parallel: a is fixed, all ground planes pass through point G

To obtain a real world representation from an image pair, it is necessary to know the cameras' placement at the time of acquisition. Consider cameras on an autonomous vehicle that are tilted down by an angle θ, as shown in Fig. 1. In this figure, a camera centered coordinate system, xyz, defines the positions of points in the physical world in front of the cameras. If the cameras are at a distance h above the ground, the ground plane can be represented as

    z = \frac{h}{\sin\theta} - y\,\frac{\cos\theta}{\sin\theta}    (1)

An image coordinate system, uvw, defines the spatial positions of data points on the image plane (uv) and the relative disparity of corresponding points between the left and right images (w). We adopt the pin-hole camera model; then the relation between the world coordinates of a point P(x, y, z) and the coordinates on the image plane (u, v) in the camera is

    u = \frac{xf}{z}, \quad v = \frac{yf}{z}, \quad w = \frac{bf}{z}    (2)

where f is the focal distance of the lens and b is the stereo baseline.

From Eqs. 1 and 2, by substituting z = bf/w and eliminating y, we obtain that in the image plane

    v = \frac{hw}{b\cos\theta} - f\tan\theta    (3)

Fig. 3. Pitch variation
Fig. 4. Slope of ground correlation line during pitch variation. The solid slanted lines represent ground correlation lines. The dashed lines represent the next distinguishable gradient near ls

In this equation, b and f are constants dependent on the camera geometry and the spacing between the cameras. In this study, we also assume that the camera height h is also fixed, that is, there only
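To make the use of Eq. 3 concrete, the following numeric sketch evaluates the ground correlation line for a set of camera parameters and cross-checks it against Eqs. 1 and 2. All numeric values (h, b, f, θ, and the sample depth) are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative camera parameters (assumed, not from the paper)
h = 1.2                    # camera height above the ground [m]
b = 0.12                   # stereo baseline [m]
f = 700.0                  # focal length [px]
theta = np.deg2rad(5.0)    # downward tilt angle

def ground_row(w):
    """Image row v on which ground pixels with disparity w lie, per Eq. (3)."""
    return h * w / (b * np.cos(theta)) - f * np.tan(theta)

# Cross-check: take a ground point at depth z, recover y from Eq. (1),
# project it with Eq. (2), and confirm (w, v) falls on the line of Eq. (3).
z = 8.0                                        # depth of a sample ground point [m]
y = (h - z * np.sin(theta)) / np.cos(theta)    # Eq. (1) solved for y
v = y * f / z                                  # Eq. (2)
w = b * f / z                                  # Eq. (2)
assert abs(v - ground_row(w)) < 1e-9           # the point lies on the ground correlation line
print(f"disparity w = {w:.2f} px  ->  ground row v = {ground_row(w):.2f} px")
```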

