Robert Speaker, Raymond Wu, Bo Zhu
6.111 Final Project
PROJECT PROPOSAL

We propose to develop a digital system capable of performing security functions, primarily surveillance. Many situations call for automated monitoring systems that can not only track intruders but also capture high-quality surveillance data. Our system will use audio and visual information to detect the source of an intrusion. The source is identified either by movement (visual tracking) or by sound; a webcam and microphones will be installed for these purposes. The system will also be able to zoom in on an area of the source that it deems important. For example, if the source is identified as a human, the camera will zoom in on the face and take a snapshot. Furthermore, a projectile device can be mounted to fire at the source.

For audio detection, small-amplitude noise is filtered out, and three microphones placed on different sides of the camera triangulate the source (if the source is behind the camera, only the audio information can help the camera locate it). Video processing will use interframe macroblock comparison and high-frequency filtering to detect the edges of moving objects. This information will be passed on to the modules controlling camera motion and the other outputs. Camera motion-tracking information will also be sent to a video display for testing and demonstration purposes.

The design contains several subsystem modules. The Audio Processing module takes in audio signals from three equidistant microphones and outputs the direction of a relevant sound source as an angle in the xy-plane (parallel to the floor). At a ten-degree resolution there are 36 possible directions, so the output is 6 bits wide. The Video Processing module will take YUV input from a webcam and process it to output the direction of the MIA (most important area) as an angle in the plane of the camera's images.
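As a rough illustration of the interframe macroblock comparison described above, the sketch below compares corresponding blocks of two luma frames by sum of absolute differences (SAD) and reports the block with the largest change as the MIA. This is a hypothetical software model only; the function names, the 8x8 block size, and the use of plain Python lists are our assumptions, and the real module will operate on the webcam's YUV stream in hardware.

```python
def block_sad(prev, curr, bx, by, bsize):
    """Sum of absolute differences for one macroblock between two frames."""
    total = 0
    for y in range(by, by + bsize):
        for x in range(bx, bx + bsize):
            total += abs(curr[y][x] - prev[y][x])
    return total

def most_important_area(prev, curr, bsize=8):
    """Return the (bx, by) corner of the macroblock with the largest
    interframe change.  Frames are 2-D lists of luma (Y) samples whose
    dimensions are assumed to be multiples of bsize."""
    height, width = len(curr), len(curr[0])
    best_sad, best_pos = -1, (0, 0)
    for by in range(0, height, bsize):
        for bx in range(0, width, bsize):
            sad = block_sad(prev, curr, bx, by, bsize)
            if sad > best_sad:
                best_sad, best_pos = sad, (bx, by)
    return best_pos
```

The winning block's position would then be converted to an angle in the image plane for the Master Controller; edge detection via high-frequency filtering would refine the region before the zoom-and-snapshot step.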
The angular information from the Audio Processor and the Video Processor goes to the Master Controller, whose job is to control camera position and operation, gun position and operation, the display to an external monitor, and interactions with memory. Camera position is changed by a Camera Motor on which the camera rests; similarly, gun position is changed by a Gun Motor on which the gun rests. The Display Module prepares the image to be output to a monitor; the type of image it sends (the live feed from the camera or a snapshot, both Display Module inputs) depends on the input from the Master Controller. A Picture Memory module retains snapshots from the camera and is controlled by the Master Controller; it outputs information to both the Master Controller and the Display Module. For debugging purposes, there are manual override controls for camera and gun motion and operation, driven by debounced keyboard signals.

Bo will work on audio detection, specifically the Audio Processing module. Ray will work on motion detection and the associated motion algorithms, specifically the Video Processing module. Bobby will work on tying all the other pieces together, specifically the Master Controller, Picture Memory, Display, and Debounce modules.
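To illustrate how the Audio Processor's 6-bit angle output might be derived, the sketch below estimates an angle of arrival from the time delay between one microphone pair and quantizes any angle into the 36 ten-degree sectors mentioned in the proposal. The function names, the use of a single pair (the full module combines three microphones to resolve all 360 degrees), and the 0.5 m spacing in the example are our assumptions for illustration; the real module will be implemented in hardware behind the AC97 codec.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value in room-temperature air

def pair_angle(delay_s, mic_spacing_m):
    """Angle of arrival (radians) implied by the time-difference-of-arrival
    between two microphones.  The ratio is clamped to the valid arcsin
    domain so that measurement noise cannot raise a ValueError."""
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    return math.asin(max(-1.0, min(1.0, ratio)))

def quantize_direction(angle_deg):
    """Map an angle in degrees to one of 36 ten-degree sectors,
    i.e. the 6-bit direction code the Audio Processor outputs."""
    return int(round(angle_deg / 10.0)) % 36
```

For example, with microphones 0.5 m apart, a delay that makes the ratio 0.5 corresponds to a 30-degree arrival angle, which quantizes to sector 3.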
He will also work with the physical components of the device, such as the camera, camera motor, gun, and gun motor.

[Block diagram: "Sentry Gun and Security" (Bo Zhu, Ray Wu, and Robert Speaker, 6.111 Final Project). The diagram shows the Master Controller connected to: the Debounce module (debounced versions of Camera_Up/Down/Rt/Lt, Zoom, Snapshot, Shoot, Safety, Manual_Overide, and Reset); the Audio Processor (AC97 with Mic1-Mic3 inputs Audio_in1-Audio_in3, output Angle_A[5:0]); the Video Processor (YUV_IN from the Camera, output Angle_V[5:0]); the Camera (Take_Picture, Zoom_In); Picture Memory (Retrieve, Pic[x:0]); the Display Module (LCD, Video_Out, Display_Type); the Camera Motor (Look_Right, Look_Left, Look_Up, Look_Down); and the Gun Motor (Fire, Up_G, Down_G, Safety).]
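The Debounce module that conditions the manual-override keyboard signals can be modeled as a counter-based filter: the output only follows the raw input after it has held a new value for several consecutive clock samples. The sketch below is a behavioral illustration under assumed names and a hypothetical three-sample threshold; the lab implementation would be a clocked Verilog module.

```python
class Debouncer:
    """Counter-based debouncer.  The registered output changes only after
    the raw input has disagreed with it for `stable_cycles` consecutive
    samples, so switch bounce shorter than that window is ignored."""

    def __init__(self, stable_cycles=3, initial=0):
        self.stable_cycles = stable_cycles
        self.out = initial   # current debounced output
        self.count = 0       # how long the input has disagreed with out

    def sample(self, raw):
        """Process one clock sample of the raw input; return the output."""
        if raw == self.out:
            self.count = 0                   # agreement: reset the counter
        else:
            self.count += 1                  # candidate new value persists
            if self.count >= self.stable_cycles:
                self.out = raw               # stable long enough: commit
                self.count = 0
        return self.out
```

With a three-sample threshold, a bouncy press such as 1, 0, 1, 1, 1 produces a single clean transition on the final sample.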