Vision Based Control Motion
Matt Baker
Kevin VanDyke

Contents: Robots; A "Seeing Robot"; Common reasons for failure of vision systems; Robustness; Enhanced Techniques; Hough Transform; Robust color classification; Model-based handling of occlusion; Tracking system model; Prediction; Object tracking with visibility determination; Multisensory Servoing; Vision Controlled Robot Model; Conclusions

Robots
Today's robots perform complex tasks with amazing precision and speed. Why, then, have they not moved from the structured environment of the factory floor into the "real" world? What is the limiting factor? Vision.

A "Seeing Robot"
A robot that can perceive and react in complex and unpredictable surroundings. This is not possible with the marker-based systems used in most laboratory vision-based control systems.

Common reasons for failure of vision systems
Small changes in the environment can result in significant variations in the image data:
- changes in contrast
- unexpected occlusion of features

Robustness
Stable measurement of local feature attributes despite significant changes in the image data that result from small changes in the 3D environment [1].

Enhanced Techniques
- The Hough transform
- Robust color classification
- Occlusion prediction
- Multisensory visual servoing

Hough Transform
Used to extract geometric object features from digital images. Each image point votes for the parameters of every feature it could lie on; features are then extracted by detecting maxima in the resulting parameter-space accumulator. Typical geometric features and their parameterizations:
- Lines: x cos(theta) + y sin(theta) = rho (two parameters)
- Circles: (x - xc)^2 + (y - yc)^2 = r^2 (three parameters)
- Ellipses: center, axis lengths, and orientation (five parameters)

Hough Transform (cont'd)
Advantages:
- Noise and background clutter do not impair detection of the local maxima
- The effects of partial occlusion and varying contrast are minimized
Drawbacks:
- Requires computation time and storage that grow exponentially with the dimensionality of the parameter space
A real-time application of the Hough transform therefore requires both a fast image-preprocessing step and an efficient implementation.
(Figure: implementation of a circle-tracking algorithm based on the Hough transform.)

Robust color classification
- Color has high disambiguation power
- Real-time operation is required
- Supervised color segmentation: the color distribution of the current scene is analyzed, and colors that do not appear in the scene are used as marker colors.
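A minimal sketch of such a hue/saturation marker test. The marker colors and tolerances below are hypothetical, and Python's standard colorsys module stands in for a real segmentation pipeline; the point is that the brightness channel (V) is ignored, so the label survives illumination changes.

```python
import colorsys

# Hypothetical marker colors, given as (hue, saturation), each in [0, 1].
MARKERS = {"red": (0.0, 0.9), "green": (1 / 3, 0.9)}

def classify(rgb, h_tol=0.05, s_tol=0.3):
    """Label a pixel by hue and saturation only; value (brightness) is
    discarded, which makes the test tolerant to illumination changes."""
    h, s, _v = colorsys.rgb_to_hsv(*rgb)
    for name, (mh, ms) in MARKERS.items():
        dh = min(abs(h - mh), 1 - abs(h - mh))  # hue is circular, so wrap
        if dh <= h_tol and abs(s - ms) <= s_tol:
            return name
    return None  # pixel belongs to no marker class

# A bright and a dim red pixel classify the same despite the brightness change.
print(classify((1.0, 0.1, 0.1)))   # -> red
print(classify((0.5, 0.05, 0.05))) # -> red
```

The tolerance on saturation is deliberately looser than on hue, since in practice saturation drifts more with lighting than hue does.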
These markers are then used as input to the visual servoing system. Colors are represented by their hue and saturation values (H and S relate to the color itself, while V relates to brightness).

Robust color classification (cont'd)
Color segmentation procedure:
- Choose four colors as marker colors
- Bring the color markers onto the object we wish to track
- Outline the markers
- Compute the color distribution
- Perform an initial segmentation

Model-based handling of occlusion
The previous two techniques take care of bad illumination and partial occlusion. What about aspect changes (complete occlusion)? Build and maintain a 3D model of the observed objects so that they can be tracked despite occlusion, and then use prediction.

Tracking system model
(Diagram: sensor data -> feature extraction -> 3D pose estimation -> robot control, closed through pose prediction, visibility determination, and feature selection against a geometric model.) The system is designed to handle aspect changes online.

Prediction
- Extract measurements of object features from the raw sensor data.
- Estimate the spatial position and orientation (pose) of the target object.
- Based on the history of estimated poses and assumptions about the object's motion, predict the object pose expected in the next sampling interval.
- With the predicted pose and the 3D model, determine feature visibility in advance.
- Use this to guide the feature-extraction process for the next frame.
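The prediction and visibility steps above can be sketched as follows. The constant-velocity motion assumption, the pose layout [x, y, z, yaw], and the normal-based visibility test are illustrative choices, not the system's actual model:

```python
def predict_pose(history, dt):
    """Predict the next pose from the two most recent estimates,
    assuming (hypothetically) constant velocity between frames."""
    (t0, p0), (t1, p1) = history[-2], history[-1]
    v = [(b - a) / (t1 - t0) for a, b in zip(p0, p1)]  # per-component velocity
    return [p + vi * dt for p, vi in zip(p1, v)]

def visible(normal, view_dir):
    """Treat a model feature as visible when its outward surface normal
    faces the camera (negative dot product with the viewing direction)."""
    return sum(n * d for n, d in zip(normal, view_dir)) < 0

# Pose = [x, y, z, yaw] (mm and radians, hypothetical layout);
# two past estimates, one sampling interval apart.
history = [(0, [0, 0, 100, 0]),
           (1, [5, 0, 100, 2])]
print(predict_pose(history, 1))        # -> [10.0, 0.0, 100.0, 4.0]
print(visible([0, 0, -1], [0, 0, 1]))  # camera looks along +z -> True
```

A real tracker would replace the two-sample difference with a filter over the whole pose history (e.g. a Kalman filter), but the role in the loop is the same: the predicted pose tells the next frame's feature extractor where to look and which model features to expect.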
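To make the Hough-transform section concrete, here is a minimal circle-voting sketch for a known radius, on synthetic points. A real implementation would also vote over the radius dimension and take its edge points from an image preprocessing step, which is exactly where the exponential time/storage cost mentioned earlier comes from:

```python
import math
from collections import defaultdict

def hough_circles(edge_points, radius, step_deg=10):
    """Each edge point votes for every candidate center lying at
    distance `radius` from it; the true center collects the most votes."""
    acc = defaultdict(int)
    for x, y in edge_points:
        for deg in range(0, 360, step_deg):
            t = math.radians(deg)
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            acc[(cx, cy)] += 1
    return max(acc, key=acc.get)  # accumulator maximum = detected center

# Synthetic edge points on a circle of radius 5 centered at (10, 10),
# plus a few clutter points that should not sway the maximum.
pts = [(10 + 5 * math.cos(math.radians(a)), 10 + 5 * math.sin(math.radians(a)))
       for a in range(0, 360, 15)]
pts += [(0, 0), (3, 17), (20, 1)]
print(hough_circles(pts, 5))  # -> (10, 10)
```

Note how the clutter points scatter their votes over many accumulator cells while the circle points concentrate theirs on one cell; this is the robustness to noise and partial occlusion claimed in the advantages list.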