
SensEye: A Multi-Tier Camera Sensor Network
Purushottam Kulkarni, Deepak Ganesan, Prashant Shenoy, and Qifeng Lu
Presenters: Yen-Chia Chen and Ivan Pechenezhskiy
EE225B (March 17, 2011)

Cameras and Sensor Platforms
[Figure: example camera sensors and sensor platforms]
Kulkarni et al., Proc. of ACM NOSSDAV, pages 141–146, 2005.

Previous Work
• Power management
  – Wake-on-wireless and Turducken (always-on)
• Multimedia sensor networks
  – Panoptes (a video-based single-tier sensor network)
• Sensor placement
  – A solvable optimization problem
• Video surveillance
  – Techniques for target detection, classification, and tracking
  – Systems with a central control unit

Motivation
• Applications
  – Environmental monitoring
  – Ad-hoc surveillance
• Constraints
  – No human interference
  – Battery-powered deployment

Multi-Tier Sensor Network
• Single-tier vs. multi-tier network: a multi-tier network
  – reduces power consumption
  – achieves similar performance
• Benefits:
  – Low cost
  – High coverage
  – High reliability
  – High functionality

SensEye: Multi-Tier Camera Network
• Achieves low latencies without sacrificing energy efficiency
• Tasks: object detection, recognition, and tracking
• Exploits redundancies in camera coverage (e.g., for object localization)

General Design Principles
• Map each task to the least powerful tier with sufficient resources
• Exploit wakeup-on-demand
• Exploit redundancy in coverage

System Design: Object Detection
• Performed at the most energy-efficient tier (Tier 1)
• Detection via frame differencing
• Randomized duty-cycling algorithm

System Design: Object Localization
• Calculation of the vector v along which the centroid of an object lies

System Design: Object Localization
• Transformation to the global coordinate frame: involves two rotations and one translation
• Triangulation (a numerical sketch appears at the end of this preview)

System Design: Inter-Tier Wakeup
• Localization by Tier 1 is used to decide which Tier 2 nodes to wake up
• A wakeup packet is sent to the chosen Tier 2 node, similar to wake-on-wireless
• To reduce the duration of wakeup, Tier 2 runs at a bare minimum when suspended

System Design: Recognition and Tracking
• Recognition algorithm executed at Tier 2
• It is assumed that any object recognition algorithm can be employed in SensEye
• Tracking involves detection, localization, and inter-tier wakeup

Hardware Architecture
[Figure: camera sensors and sensor platforms used in the prototype]

Hardware Architecture
• Tier 1:
  – Low-power camera sensors (Cyclops or CMUcam)
  – Low-power sensor platform (Mote)
• Tier 2:
  – Webcams (Logitech)
  – Sensor platform (Intel Stargate) with a low-power wakeup circuit (Mote)
• Tier 3:
  – High-performance PTZ camera (Sony) and mini-ITX embedded PC

Hardware Architecture
[Figure]

Software Architecture (Proposed)
[Figure: proposed software architecture]

Software Architecture (Implemented)
• CMUcam Frame Differentiator
• Mote-Level Detector
• Wakeup Mote
• High-Resolution Object Detection and Recognition
• PTZ Controller

CMUcam Frame Differentiator
• CMUcam image capture is triggered by the Mote-Level Detector
• Detection is achieved by differencing against a reference background frame (non-zero areas correspond to an object)
• Two differencing modes: the initial image (88x143 or 176x255) is converted to an 8x8 or 16x16 grid

Mote-Level Detector
• Sends initialization commands
• Sends a sampling signal to the CMUcam
• Gets the frame difference from the CMUcam
• Decides whether an event occurred
• Broadcasts a trigger to the higher tier if an event occurred
• Sleeps if no event is detected
• Duty-cycles the CMUcam
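The two slides above describe the Tier 1 detection loop only in words. Below is a minimal Python sketch of that idea, not the authors' implementation: the Mote duty-cycles the CMUcam, differences each coarse grid against a reference background grid, and broadcasts a trigger toward Tier 2 when enough cells change. The function names, thresholds, and duty-cycle parameters are illustrative assumptions.

import random
import time

# Illustrative constants; none of these values come from the paper.
CELL_DIFF_THRESHOLD = 20   # per-cell change that counts as "different"
EVENT_CELL_COUNT = 3       # number of changed cells that constitutes an event
MEAN_SLEEP_SEC = 5.0       # mean of the randomized duty cycle

def frame_difference(grid, background):
    """Count grid cells (e.g., of an 8x8 grid) whose value differs from the
    reference background by more than CELL_DIFF_THRESHOLD."""
    return sum(
        1
        for row, bg_row in zip(grid, background)
        for cell, bg in zip(row, bg_row)
        if abs(cell - bg) > CELL_DIFF_THRESHOLD
    )

def detector_loop(capture_grid, broadcast_trigger):
    """Hypothetical Mote-level detector: duty-cycle the camera, difference each
    sampled grid against the background, and trigger the higher tier on an event."""
    background = capture_grid()        # reference background frame
    while True:
        grid = capture_grid()          # sampling signal to the CMUcam
        if frame_difference(grid, background) >= EVENT_CELL_COUNT:
            broadcast_trigger()        # wakeup trigger toward the Tier 2 node
        # Randomized duty-cycling: sleep a random interval around the mean period.
        time.sleep(random.uniform(0.5 * MEAN_SLEEP_SEC, 1.5 * MEAN_SLEEP_SEC))

Here capture_grid and broadcast_trigger are placeholders standing in for the CMUcam serial interface and the Mote radio, respectively.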
Wakeup Mote
• Receives triggers from the lower-tier Motes
• Computes the coordinates of the detected object
• Decides whether to wake up the Stargate

High Resolution Object Detection and Recognition by Stargate
• Frame differencing
• Image smoothing
• Obtaining the average values of the red, green, and blue components of the object
• Matching against a library of objects
(A pipeline sketch appears at the end of this preview.)

Experimental Evaluation
• Component benchmarks
  – Latency and energy consumption
  – Localization accuracy
• SensEye vs. a single-tier network
  – Coverage
  – Energy usage
  – Sensing reliability
  – Sensitivity to system parameters

Latency and Energy Consumption
• Tier 1:
  – Cyclops
  – CMUcam
• Tier 2:
  – Webcam
[Figure: per-tier latency and energy measurements; highlighted values: 4 sec, 4.7 J]

Localization Accuracy
[Figure: localization accuracy results]

Experimental Evaluation: Sensor Placement and Coverage
• Wall: 3 m x 1.65 m
• Object appearance time: 7 sec
• Interval between appearances: 30 sec
• Only one object at any time
• 50 object appearances
• Tier 1 Mote sampling period: 5 sec

Network Energy Usage
• SensEye: ~470 J
• Single-tier: ~2900 J

Sensing Reliability
• The single-tier system detected 45 out of the 50 objects
• SensEye detected 42 (46 with the use of the PTZ camera)

Sensitivity to System Parameters
[Figure: sensitivity results]

Conclusion
• A well-designed multi-tier camera sensor network might have significant benefits over a single-tier camera network
• General principles for multi-tier sensor network design have been proposed
• It has been experimentally demonstrated that a multi-tier network can achieve about an order of magnitude reduction in energy usage without sacrificing reliability

Thank you!

Power Management
• Wake-on-wireless
  – Separation of the control channel and the data channel
  – An incoming radio signal wakes up powered-off devices
• Turducken
  – A multi-tier structure that uses a lower tier to wake up a higher tier

Multimedia Sensor Network
• Panoptes
  – A video-based sensor network
  – Single-tier, similar to Tier 2 in SensEye
  – Incorporates compression, buffering, and filtering (can be used by Tier 2)
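Returning to the localization step from the "System Design: Object Localization" slides, here is a small numpy sketch of the geometry; it is our reconstruction under assumed inputs, not the authors' code. Each Tier 1 camera reports a bearing vector toward the object's centroid; the vector is rotated and translated into the global frame using the camera's known pose, and the object position is triangulated as the point closest, in the least-squares sense, to all of the viewing rays. The camera poses and the pixel-to-bearing conversion are assumed to come from calibration.

import numpy as np

def bearing_to_global(d_cam, R, t):
    """Rotate a bearing vector from the camera frame into the global frame.
    R encodes the camera's two rotations and t its position (the translation)."""
    d = R @ np.asarray(d_cam, dtype=float)
    return np.asarray(t, dtype=float), d / np.linalg.norm(d)

def triangulate(rays):
    """Return the point minimizing the summed squared distance to all viewing
    rays, where each ray is an (origin, unit-direction) pair in the global frame."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for origin, direction in rays:
        P = np.eye(3) - np.outer(direction, direction)  # projector normal to the ray
        A += P
        b += P @ origin
    return np.linalg.solve(A, b)

# Purely illustrative example: two cameras observe a point near (0.5, 1, 0).
cam1 = bearing_to_global([0.0, 1.0, 0.0], np.eye(3), [0.5, 0.0, 0.0])
R2 = np.array([[0.0, -1.0, 0.0],   # rotation of 90 degrees about the z axis
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
cam2 = bearing_to_global([0.0, 1.0, 0.0], R2, [1.5, 1.0, 0.0])
print(triangulate([cam1, cam2]))   # approximately [0.5, 1.0, 0.0]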


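Finally, a sketch of the Tier 2 pipeline listed on the "High Resolution Object Detection and Recognition by Stargate" slide. The steps follow the slide (frame differencing, smoothing, average object color, library matching), but the particular filter, threshold, and nearest-color matching rule are our assumptions rather than details from the paper.

import numpy as np

def recognize(frame, background, library, diff_threshold=30.0):
    """Hypothetical Tier 2 recognition pipeline: frame differencing, smoothing,
    mean object color, and nearest-color matching against an object library.
    frame and background are HxWx3 arrays; library maps names to (R, G, B)."""
    frame = np.asarray(frame, dtype=float)
    background = np.asarray(background, dtype=float)

    # 1. Frame differencing against the reference background.
    diff = np.abs(frame - background).mean(axis=2)

    # 2. Simple smoothing (3x3 box filter) to suppress pixel noise.
    padded = np.pad(diff, 1, mode="edge")
    h, w = diff.shape
    smooth = sum(padded[dy:dy + h, dx:dx + w]
                 for dy in range(3) for dx in range(3)) / 9.0

    # 3. Foreground mask and the average red, green, and blue of the object.
    mask = smooth > diff_threshold
    if not mask.any():
        return None                    # nothing detected
    mean_rgb = frame[mask].mean(axis=0)

    # 4. Match against the library by nearest mean color.
    return min(library,
               key=lambda name: np.linalg.norm(mean_rgb - np.asarray(library[name], dtype=float)))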
