MIT 6.111 - Study Guide

Auto-Targeting in a Remote Sentry Turret

Jacky Chang, Stephanie Paige, Eli Stickgold

October 31, 2008

The initial idea for our project was to create an auto-targeting system for a Nerf Vulcan, an automatic, belt-fed gun that shoots small foam darts. The weight of the Vulcan, however, makes it somewhat unfeasible to do this without spending a large amount of effort engineering a system to handle the gun's bulk, so the plan is to do a proof of concept using a laser pointer and small servo motors. The project is divided into four parts: the GUI; video processing to produce a pixel that is the "center of the target"; a calculation module to determine the necessary gun position to fire at that target; and the actual movement module.

A camera mounted below the gun will remain stationary to provide all of the footage necessary for targeting. The plan is to have a GUI that allows users either to set the turret into an automatic shooting mode, where it shoots at targets when it senses movement, or to override this and use the gun manually, directing its movement and picking targets using the video feed. The GUI will take in the input from the video camera, display it, and then output three signals: an enable switch, which tells the calculation module whether to take input from video processing or from the GUI; a pixel target, which will only be used if the enable switch is appropriately set; and a fire command. One of the ZBT memories will be devoted exclusively to this module in order to properly display the video input. The module will need to keep track of the mouse's movement relative to a predetermined origin and will transmit its current location when a click is sensed.

The second module will deal with the actual video processing when the turret is in autofire mode. It will store four frames of video and compare them, finding changes between frames and determining whether or not those changes are large enough to constitute a threat that should be shot.
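The threat test the second module describes, a frame difference followed by a centroid, can be sketched in Python. This sketch abstracts away the ZBT frame buffers and downsampling, and the threshold constants are illustrative placeholders, not values from the project:

```python
# Sketch of the frame-differencing motion test described above.
# The thresholds below are made-up illustrative values.

DIFF_THRESHOLD = 30   # per-pixel intensity change that counts as motion
AREA_THRESHOLD = 25   # moving pixels needed to constitute a "threat"

def motion_target(prev_frame, curr_frame):
    """Compare two grayscale frames (lists of rows of 0-255 ints).
    Return the (row, col) center of motion, or None if the change is
    too small to be worth shooting at."""
    moving = [(r, c)
              for r, row in enumerate(curr_frame)
              for c, px in enumerate(row)
              if abs(px - prev_frame[r][c]) > DIFF_THRESHOLD]
    if len(moving) < AREA_THRESHOLD:
        return None                       # not a threat
    # Center of motion = centroid of all moving pixels.
    mean_row = sum(r for r, _ in moving) / len(moving)
    mean_col = sum(c for _, c in moving) / len(moving)
    return (round(mean_row), round(mean_col))
```

Grouping the moving pixels into distinct objects, as the module must also do, would require an extra connected-components pass on top of this.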
It will have to decide which shifting sets of pixels constitute a single object, pick a single object to target if there are multiple moving objects, and determine the center of motion on that target. This module will also take the video data from the camera, using the other ZBT memory and downsampled images to store multiple frames, and will output a pixel to aim the gun at as well as a fire command.

The third module will take the pixel target and determine the necessary rotation of the gun to point it at that target. It will take in two pixel locations, two firing commands, a single switch that tells it which to listen to, the distance to the target from the movement module, and a "ready" signal from the movement module. When a pixel target is given, the module will send an output to the movement module specifying the azimuth of the target. Once the movement module has indicated the distance to the target, this module will calculate the elevation necessary to hit the target and provide that calculation to the movement module. It will then pass through the firing command when the movement module reports it has reached the provided position. The initial plan is to manually input a distance to be used in the elevation calculation. If we have time, we will mount a rangefinder under the laser pointer that feeds data to the calculation module after the movement module has reached the correct azimuth.

The last module will turn the actual gun using the azimuth and elevation provided by the calculation module and fire the weapon once it is in position. It first takes an azimuth from the calculation module and moves the gun to the proper position. It will then take the distance data, pass it back to the calculation module, and wait to receive the elevation data in response. When positioned correctly, it will send back a "ready" signal, at which point it will receive a fire command and trigger the gun.
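The proposal does not say how the third module turns a distance into an elevation. One plausible version for the dart gun, assuming a drag-free ballistic model and a made-up muzzle speed (the laser-pointer proof of concept would instead aim line-of-sight), follows from the range equation R = v² sin(2θ)/g:

```python
import math

G = 9.81             # m/s^2
MUZZLE_SPEED = 15.0  # m/s -- assumed dart speed, not a measured value

def elevation_for_distance(distance_m, speed=MUZZLE_SPEED):
    """Elevation angle (radians) to hit a target at the same height
    `distance_m` meters away, ignoring drag: solving the range
    equation R = v^2 * sin(2*theta) / g for the low-arc solution
    gives theta = asin(g*R / v^2) / 2."""
    s = G * distance_m / (speed * speed)
    if s > 1.0:
        raise ValueError("target out of range for this muzzle speed")
    return 0.5 * math.asin(s)
```

In hardware this would more likely be a small lookup table indexed by distance than a trigonometric computation.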
There will be some margin of error allowed so that the gun will not constantly be trying to finish tracking a slightly moving pixel without ever asserting "ready" and being able to fire.
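That margin of error amounts to a deadband on both axes; a minimal sketch, with placeholder tolerance values rather than the project's, is:

```python
# Deadband check for the movement module's "ready" signal.
# Tolerance values are illustrative placeholders.

AZIMUTH_TOLERANCE = 1.0    # degrees
ELEVATION_TOLERANCE = 1.0  # degrees

def ready_to_fire(current_az, current_el, target_az, target_el,
                  az_tol=AZIMUTH_TOLERANCE, el_tol=ELEVATION_TOLERANCE):
    """Assert 'ready' once both axes are within tolerance of the
    commanded position, so a slightly moving target cannot keep the
    gun tracking forever without ever firing."""
    return (abs(current_az - target_az) <= az_tol and
            abs(current_el - target_el) <= el_tol)
```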