MIT 16 412J - Cognitive Robotics

16.412 Cognitive Robotics
Autonomous Visual Tracking Algorithms
Alexander Omelchenko
May 12, 2004

1. Problem Formulation

This paper describes an algorithm for autonomous visual tracking of a moving object. The algorithm is designed for a multi-vehicle surveillance system that includes a UAV and a ground rover. A typical mission scenario of such a system is depicted in Figure 1. The rover is covered visually from the UAV in order to facilitate remote operation of the rover and obstacle detection by the rover (ground rovers, using their forward-looking cameras, are poor at detecting obstacles below ground level, such as holes in the ground, and the additional visual coverage from the aircraft makes such obstacles easier to detect). At the same time, the rover provides visual tracking of various objects on the ground. Both the aircraft and the rover are equipped with pan/tilt/zoom cameras.

Figure 1: Mission scenario (ground station, testbed aircraft, camera FOV; images exchanged between aircraft and ground rover)

The general approach to the design of the control system is shown in Figure 2. The control algorithm consists of two parts. Part 1 generates control signals for the camera's motors, while part 2 performs visual tracking of a moving object in the camera's field of view. The output of part 2 is the set of values {∆qp, ∆qt, ∆q̇p, ∆q̇t}, the error signals of the pan and tilt angles of the camera and their rates of change, which serves as the input to part 1 of the algorithm.

Figure 2: Control algorithm (camera video output → Part 2: visual tracking algorithm → {∆qp, ∆qt, ∆q̇p, ∆q̇t} → Part 1: camera motion control algorithm → motor control → camera)

Before the objective problem depicted in Figure 1 is solved, a simplified version of the problem is considered. The experimental setup used in the simplified version is shown in Figure 3: a pan/tilt/zoom camera is mounted on the ceiling of a room, while the rover moves on the floor of the room, and the camera's task is to visually track the rover.
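The interface between the two parts can be sketched as a small data structure carrying the four error signals. This is only an illustrative sketch: the names below are assumptions, not taken from the paper, and a simple finite-difference rate estimate stands in for whatever rate estimator part 2 actually uses.

```python
from dataclasses import dataclass

# Hypothetical container for the output of part 2 (the visual tracker),
# which part 1 (the camera motion controller) consumes.
@dataclass
class TrackingError:
    dq_pan: float        # pan-angle error, ∆qp (rad)
    dq_tilt: float       # tilt-angle error, ∆qt (rad)
    dq_pan_rate: float   # rate of change of pan error, ∆q̇p (rad/s)
    dq_tilt_rate: float  # rate of change of tilt error, ∆q̇t (rad/s)

def error_from_frames(prev: TrackingError, pan_err: float,
                      tilt_err: float, dt: float) -> TrackingError:
    """Build the next error signal from newly measured pan/tilt errors,
    estimating the rates by finite differences over the frame period dt."""
    return TrackingError(
        dq_pan=pan_err,
        dq_tilt=tilt_err,
        dq_pan_rate=(pan_err - prev.dq_pan) / dt,
        dq_tilt_rate=(tilt_err - prev.dq_tilt) / dt,
    )
```

The controller in part 1 would then map these four values to motor commands for the pan and tilt axes.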
Figure 3: Experimental setup of the simplified problem (pan/tilt/zoom camera installed on the ceiling of the room)

Figure 4: Tracking of moving objects from a stationary rover (camera FOV)

The experimental setup of Figure 3 is also equivalent to the situation in which a moving object is tracked visually from a stationary rover, as depicted in Figure 4. Part 1 of the control algorithm is outside the scope of this class; this paper therefore focuses on the second part of the control algorithm, i.e. the visual tracking part.

2. Different Types of Visual Tracking

As shown in Figure 5, there are two main types of visual tracking: continuous tracking and stepwise tracking. In the continuous tracking mode, the camera follows the tracked object at all times. One possible disadvantage of this mode is that the camera, making frequent adjustments to its orientation, may destabilize the video.

Figure 5: Visual tracking modes: (a) stepwise tracking, (b) continuous tracking

In the stepwise tracking mode, on the other hand, the camera changes its orientation only at discrete moments of time, with some period. During the intervals between rotations, the camera's field of view stays fixed with respect to the background.

3. Visual Tracking Algorithm One: Center of Mass Tracking

At the time of writing, the experimental setup shown in Figure 3 was not yet ready. However, to test a visual tracking algorithm, a sequence of images containing any moving object is sufficient. Figure 6 shows a sequence of images recorded from the avionics computer of the testbed aircraft in the laboratory. The situation modeled in the images, with an object moving toward the camera, is more challenging for visual tracking than one in which the object moves in a direction perpendicular to the line of sight.
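The stepwise mode described in Section 2 amounts to rate-limiting the camera commands: the tracker can still measure the error every frame, but a reorientation command is issued at most once per period, and the field of view stays fixed in between. A minimal sketch of such a command scheduler (the function name and values are illustrative assumptions, not from the paper):

```python
def camera_commands(frame_times, period):
    """Return the subset of frame times at which a stepwise tracker issues
    a camera reorientation command, at most one per `period` seconds.
    A continuous tracker would instead command at every frame time."""
    commands = []
    next_due = 0.0  # earliest time the next reorientation is allowed
    for t in frame_times:
        if t >= next_due:
            commands.append(t)
            next_due = t + period
    return commands
```

With a period much longer than the frame interval, the camera holds its orientation over many frames, which is exactly the property the center-of-mass algorithm below relies on (a background that is stationary between rotations).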
Details of the visual tracking algorithm are shown in Figure 7. Images (a) and (b) are two consecutive images. Image (c) is obtained by subtracting image (b) from image (a). An edge detection algorithm is applied to image (c), producing the binary image (d). The center of mass of image (d) is then calculated using formulas (3.1) and (3.2):

    M00 = Σx Σy I(x, y),   M10 = Σx Σy x·I(x, y),   M01 = Σx Σy y·I(x, y)   (3.1)

    xc = M10 / M00,   yc = M01 / M00   (3.2)

The result of the algorithm is image (e). The large cross indicates the center of the image, while the small cross indicates the center of mass of the object. The line connecting the two centers represents the tracking error, because the purpose of the camera control algorithm is to keep the observed object in the center of the field of view. The results of the algorithm applied to the test sequence of images are shown in Figures 8 and 9.

Figure 6: Test sequence of images

Figure 7: Visual tracking algorithm: (a), (b) consecutive images; (c) difference image; (d) binary edge image; (e) result

Figure 8: Visual tracking algorithm results

Figure 9: Visual tracking algorithm results

The tracking error obtained from the images is interpreted as shown in Figure 10. It has two components: ∆lp, parallel to the pan direction of the camera motion, and ∆lt, parallel to the tilt direction of the camera motion. The error values for the pan and tilt angles of the camera are inferred from ∆lp and ∆lt, respectively, using geometrical arguments.

Figure 10: Tracking error components (∆l between the center of the image and the center of the object, decomposed into ∆lp and ∆lt)

The implemented visual tracking algorithm performed very well during tests and was able to track an object moving toward the camera, which is a more challenging condition for visual tracking than motion in a direction perpendicular to the line of sight. However, the algorithm works only when the background is stationary and there is a single moving object in the camera's field of view. Its use is therefore limited to the stepwise type of visual tracking described in Figure 5.
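The pipeline of Figure 7 together with formulas (3.1) and (3.2) can be sketched as follows. This is a minimal reconstruction under stated assumptions: simple thresholding of the difference image stands in for the paper's edge-detection step, a linear FOV scaling stands in for the unspecified geometrical arguments, and all function names and parameter values are illustrative.

```python
import math

import numpy as np

def frame_difference_centroid(frame_a, frame_b, threshold=30):
    """One step of center-of-mass tracking: difference two consecutive
    grayscale frames (image (c)), binarize (standing in for edge detection,
    image (d)), and compute the centroid via the moments (3.1)-(3.2)."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    binary = (diff > threshold).astype(np.float64)
    m00 = binary.sum()                       # M00 in (3.1)
    if m00 == 0:
        return None                          # no motion detected
    ys, xs = np.mgrid[0:binary.shape[0], 0:binary.shape[1]]
    m10 = (xs * binary).sum()                # M10 in (3.1)
    m01 = (ys * binary).sum()                # M01 in (3.1)
    return m10 / m00, m01 / m00              # (xc, yc) from (3.2)

def pixel_error(centroid, shape):
    """Offset of the object centroid from the image center, split into the
    components parallel to the pan (dl_p) and tilt (dl_t) directions."""
    xc, yc = centroid
    return xc - shape[1] / 2.0, yc - shape[0] / 2.0

def angle_errors(dl_p, dl_t, fov_h_deg, fov_v_deg, shape):
    """One plausible geometrical argument (an assumption, not necessarily
    the paper's): scale pixel offsets by the angular size of one pixel."""
    dq_p = dl_p / shape[1] * math.radians(fov_h_deg)
    dq_t = dl_t / shape[0] * math.radians(fov_v_deg)
    return dq_p, dq_t
```

For small offsets this linear scaling is a reasonable approximation; near the edge of a wide field of view, an arctangent-based pinhole model would be more accurate.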
This makes the algorithm appropriate for use in the simplified problem depicted in Figure 3.

4. Visual Tracking Algorithm Two: Contour Tracking

Since in this problem it is necessary to zoom in and out on the object being tracked, tracking