JOINTLY OPTIMIZED QUANTIZATION AND TIME DELAY ESTIMATION FOR SENSOR NETWORKS

Lavanya Vasudevan, Antonio Ortega, and Urbashi Mitra
University of Southern California
Department of Electrical Engineering
Los Angeles, California 90089-2564

* This research has been funded in part by the Integrated Media Systems Center, a National Science Foundation Engineering Research Center, Cooperative Agreement No. EEC-9529152; the Pratt & Whitney Institute for Collaborative Engineering (PWICE) at USC; and the National Science Foundation Small ITR, CCR-0313392. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the National Science Foundation. Early versions of this work were published in [1, 2].

ABSTRACT

Sensor networks have emerged as a fundamentally new tool for monitoring inaccessible environments. Strict limitations on system bandwidth and sensor energy resources motivate the use of data compression at each sensor. Localization of unknown sources is a key application of sensor networks, requiring, as an initial step, estimation of the time delay between signals received at different sensors. In this work, joint designs for quantizer-time delay estimator structures are presented. The goal for these new application-specific encoders/estimators is to achieve the best time delay estimate at a given bandwidth budget or latency bound, or to minimize the rate required to reach an estimate with desired accuracy. For white sources, the optimal structure is shown to be a maximum-likelihood detector coupled with a maximum mutual information quantizer. Variations of this system are also considered: sequential detection schemes, empirical methods for unknown signal models, and rate-constrained methods. The proposed designs offer gains over those based on classical compression criteria.

1. INTRODUCTION

Sensor networks consist of a large number of low-power nodes cooperating to achieve a sensing goal. Typically, noisy measurements are collected from each sensor and fused at some central site or node to estimate an environmental parameter. These sensors are limited in power, memory, computational ability, and bandwidth. Computation is in general cheaper than communication [3], suggesting the use of data compression at the sensors. In this paper, we seek to develop algorithms to support collaboration and target localization in a sensor network; thus we consider time-delay estimation, which is integral to these tasks. Accordingly, we consider systems where quantized data is sent from each of two sensors, and the fusion center estimates the associated time delay from the received decoded data. Our goal is to design scalar quantization methods at the sensors that maximize the accuracy of the time-delay estimate, rather than simply reproduce the sensor data with some fidelity, e.g., a desired MSE level. We first cast time-delay estimation as a discrete, multi-hypothesis testing problem, develop the minimum probability of error time delay estimator for our system model, and then design a novel scalar quantization scheme that optimizes the probability of error for this detector. We provide designs based on exact knowledge of source and noise statistics, and show that our processing-aware design outperforms standard detection-quantization schemes by achieving the best time delay estimate at a given latency bound, or minimizing the rate required to reach a required accuracy. We also present empirical techniques to train the quantizers and detectors on real data samples, and entropy-constrained approaches to operate within a rate budget.
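[Editorial illustration.] To make the multi-hypothesis view of time-delay estimation concrete, the sketch below scores a set of candidate integer delays against quantized sensor outputs using a precomputed table of aligned-cell log-probabilities and returns the best-scoring delay. It is a minimal sketch of the detector side only: the uniform quantizer, the Gaussian toy model, the Monte Carlo table, the per-pair averaging, and every function name here are assumptions made for illustration, not the paper's construction.

import numpy as np

def uniform_quantize(x, num_levels=4, x_max=3.0):
    """Illustrative uniform scalar quantizer over [-x_max, x_max];
    returns cell indices in 0..num_levels-1."""
    edges = np.linspace(-x_max, x_max, num_levels + 1)[1:-1]  # interior thresholds
    return np.digitize(x, edges)

def ml_delay_estimate(q1, q2, candidate_delays, cell_loglik):
    """Discrete multi-hypothesis test: pick the delay D maximizing the
    average log-probability of the observed index pairs (q1[m], q2[m + D])
    under the aligned-pair model encoded in cell_loglik[i, j].
    Averaging over the overlap keeps delays with different overlap
    lengths comparable (a simplification of a full likelihood)."""
    n = len(q1)
    best_delay, best_score = None, -np.inf
    for D in candidate_delays:
        m0, m1 = max(0, -D), min(n, n - D)  # sample range where both streams overlap
        score = cell_loglik[q1[m0:m1], q2[m0 + D:m1 + D]].mean()
        if score > best_score:
            best_delay, best_score = D, score
    return best_delay

# toy data: white Gaussian source observed at two sensors with an integer delay
rng = np.random.default_rng(0)
n, true_delay, noise_std = 1000, 7, 0.3
x = rng.standard_normal(n)
q1 = uniform_quantize(x + noise_std * rng.standard_normal(n))
q2 = uniform_quantize(np.roll(x, true_delay) + noise_std * rng.standard_normal(n))  # circular shift stands in for a delay

# Monte Carlo estimate of the aligned-cell joint log-probabilities
s = rng.standard_normal(200_000)
qa = uniform_quantize(s + noise_std * rng.standard_normal(s.size))
qb = uniform_quantize(s + noise_std * rng.standard_normal(s.size))
counts = np.full((4, 4), 1e-3)  # small prior mass avoids log(0)
np.add.at(counts, (qa, qb), 1)
cell_loglik = np.log(counts / counts.sum())

print(ml_delay_estimate(q1, q2, range(-20, 21), cell_loglik))  # expected: 7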
Prior work [4, 5, 6] on hypothesis testing with quantized data has focused on binary decisions in a Neyman-Pearson framework. This problem set-up does not translate directly to our multiple-hypothesis, time-delay estimation problem. Since exact calculation of the probability of error is generally intractable, these papers focus instead on optimizing asymptotic distance measures between the distributions under each of the two hypotheses (e.g., Kullback-Leibler distance, Chernoff bound). In more recent work [7, 8], the asymptotic bound on the classification (multi-hypothesis testing) error probability is shown to be related to the smallest pairwise distributional distance. In our work, we explicitly derive a quantizer based on the probability of error. Interestingly, for our particular model, since all incorrect hypotheses are identically distributed, the resulting optimal quantizer maximizes the relative entropy (a distributional distance) between the correct and incorrect hypotheses.

Standard scalar quantization seeks to encode the data from a source, characterized by its probability density function (pdf), with the lowest possible rate and the smallest average distortion. The most common distortion measure is the mean squared error (MSE) between quantized and unquantized data [9]. Non-MSE-based quantization for cross-correlation detection includes the study in [10] of transform-based quantization to optimize the Cramer-Rao bound for time-delay estimation. The transform-based method comes at the cost of increased complexity and processing delay; furthermore, the correlator output depends on the reproduction levels of the quantizer, whereas our minimum probability of error detector is independent of the reproduction levels. Further, instead of optimizing a bound (the CRB), which may or may not be achieved by a practical estimator, our novel quantizer directly optimizes the probability of error for the time-delay estimation task. In earlier work [11], we presented quantizer designs for cross-correlation that minimize the squared error between the quantized and unquantized correlation functions. Again, this is only an approximation to the probability of error, which should be (and, in our new approach, is) the criterion for optimization.
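[Editorial illustration.] The introduction argues that, because all incorrect delay hypotheses are identically distributed in this model, the optimal quantizer maximizes the relative entropy between the quantized outputs under the correct and an incorrect hypothesis; if an incorrect delay makes the two quantized samples independent with unchanged marginals (plausible for a white source), that relative entropy equals the mutual information between the two quantized observations mentioned in the abstract. The sketch below is a toy numerical illustration of that criterion against the conventional MSE criterion. The jointly Gaussian model, the noise level, the symmetric one-parameter threshold form, the Monte Carlo estimates, the grid search, and all names are assumptions for illustration, not the paper's design procedure.

import numpy as np

rng = np.random.default_rng(1)

def quantize_idx(z, t):
    """Symmetric 2-bit scalar quantizer with thresholds (-t, 0, +t); returns cell indices 0..3."""
    return np.digitize(z, [-t, 0.0, t])

# assumed toy model: unit-variance white Gaussian source, sensor noise std 0.5
n, noise_std = 200_000, 0.5
x = rng.standard_normal(n)
z1 = x + noise_std * rng.standard_normal(n)  # sensor 1, delay-aligned with sensor 2
z2 = x + noise_std * rng.standard_normal(n)  # sensor 2 (same source sample)

def mutual_information(q1, q2, k=4):
    """I(Q1;Q2) = KL(joint || product of marginals), estimated from counts.
    If an incorrect delay makes the quantized samples independent (as assumed
    here), this equals the relative entropy between the correct- and
    incorrect-hypothesis distributions."""
    joint = np.zeros((k, k))
    np.add.at(joint, (q1, q2), 1)
    joint /= joint.sum()
    p1, p2 = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / np.outer(p1, p2)[mask])))

def mse(z, t):
    """Distortion of the same quantizer when reproduction levels are the cell centroids."""
    idx = quantize_idx(z, t)
    rep = np.array([z[idx == i].mean() if np.any(idx == i) else 0.0 for i in range(4)])
    return float(np.mean((z - rep[idx]) ** 2))

thresholds = np.linspace(0.2, 3.0, 29)
t_mi = max(thresholds, key=lambda t: mutual_information(quantize_idx(z1, t), quantize_idx(z2, t)))
t_mse = min(thresholds, key=lambda t: mse(z1, t))
print(f"detection-oriented threshold ~{t_mi:.2f}, MSE-oriented threshold ~{t_mse:.2f}")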


2. QUANTIZER AND DETECTOR DESIGN

2.1. Problem Formulation

In this work, we study the special case of the two-sensor time-delay estimation subsystem. With multiple sensors, delay estimates can be obtained for pairs of sensors and triangulated with the sensor locations to determine the source position. Consider two sensors, capturing delayed and noisy versions of the same discrete signal z(.): z1(m) = z(m) +