Columbia CSEE 4840 - Passive Sonar

Project Design
Eric Chang, Mike Ilardi, Jess Kaneshiro, Jonathan Steiner

Introduction

In developing the Passive Sonar, our group intends to incorporate lessons from both Embedded Systems and E:4986, the latter of which three of our members are currently enrolled in. Our device works by triangulating the location of a sound source along a horizontal axis, calculating the difference between the arrival times of the sound at two spaced microphones. A marker projected onto a screen will represent the calculated location of the sound source.

Trigonometric calculations in the form of ROM-encoded lookup tables will be used to find the location based upon the difference in the arrival times of the peak of the sound wave for any given period of time. Due to the complications involved in calculating the vertical position of a sound source, we will only be able to find the position along the horizontal axis.

Abstract

Our basic setup is shown in Figure 1. Two microphones are spaced a distance x apart. A sound source is located on a plane a distance l perpendicular to the plane in which the microphones are situated. A projection screen is located equidistant between the microphones. d1 and d2 represent the distances from the sound source to the respective microphones. By means of the trigonometric calculations outlined on the attached diagram, we can triangulate the horizontal position of the sound source.

Process Flow

Figure 2 contains a block diagram outlining, at an abstract level, the logical flow of our Passive Sonar.

Stage 1:

Analog sound signals enter the left and right microphone inputs on the audio codec, labeled A on the block diagram. Serial digital audio samples corresponding to the left and right analog inputs feed out of two digital outputs on the codec and into separate 20-bit shift registers, labeled B and C on the block diagram.
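Since Figure 1 and its trigonometry are not reproduced in this preview, the following Python sketch illustrates one common way to recover the horizontal position from the arrival-time difference, using a far-field approximation (path difference ≈ x·sin θ). The speed-of-sound constant, the sample rate, and the helper name `horizontal_position` are illustrative assumptions, not the group's actual Figure 1 derivation.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C (assumed)

def horizontal_position(dt, mic_spacing, screen_distance):
    """Estimate the source's horizontal offset from the midpoint of the
    two mics, given the arrival-time difference dt between them.

    Far-field approximation: path difference d1 - d2 ~ x*sin(theta),
    where theta is the bearing from the mics' centerline.
    """
    path_diff = SPEED_OF_SOUND * dt              # d1 - d2, in meters
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing))
    theta = math.asin(ratio)                     # bearing angle
    return screen_distance * math.tan(theta)     # offset on the source plane

# A ROM-style lookup table could be generated offline by sweeping the
# sample-count difference (a 48 kHz sample rate is assumed here):
FS = 48000
LUT = [horizontal_position(n / FS, mic_spacing=1.0, screen_distance=2.0)
       for n in range(-64, 65)]
```

A centered source (dt = 0) maps to offset 0, and the sign of dt selects which side of center the marker appears on; the table above is symmetric about its middle entry.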
Configuration pins on the codec are connected to the MicroBlaze processor to allow for easy configuration through a software interface.

Stage 2:

Data from the shift registers are read into the peak detectors, labeled F and G, once an entire 20-bit word has been loaded into the shift registers. The peak detectors operate by first comparing the sample with a noise constant stored in the register labeled I on the diagram. Register I is configurable through the MicroBlaze. If the sample is above the noise constant, the peak detector is activated for a constant period of time (to be determined by the length of a normal transient). All subsequent samples taken during this period of activation are compared against the previous greatest value, and the time of occurrence, based upon counter H's count, is stored in internal registers. At the end of the transient sample period, the time of the greatest value is output to the subtractor (D in the diagram) to await the signal from the inactivated peak detector. While the subtractor is 'waiting', that peak detector is disabled, pending peak detection of the second signal.

Stage 3:

The inactivated peak detector (F or G) is now awaiting a signal that breaks the noise threshold. The second peak detector functions exactly like the first one, storing and tagging the maximum sample value. When the second peak detector reaches the end of the transient detection period, the value is sent to the subtractor.

Note: the peak detectors send an extra bit (notated +1 on our diagram) along with the time to tell the subtractor whether or not it is a new word. The subtractor will only output the difference when two new words have been received.

Stage 4:

The subtractor (D) receives the first sample and holds it until it receives the second sample. The extra bit coming from the peak detectors governs whether the subtractor is waiting or subtracting (to send a new difference).

Stage 5:

Our time difference is then sent to our lookup table (J).
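Before following the time difference into the LUT, the Stage 2–4 behavior can be modeled in software. This is an illustrative Python sketch of the peak-detector/subtractor handshake, not the actual VHDL; the window length and function names are assumptions.

```python
def detect_peak(samples, noise_floor, window):
    """Model of one peak detector (F or G): the first sample above the
    noise floor activates the detector for `window` ticks; within that
    window it tracks the largest sample and the counter time at which
    it occurred.  Returns that time, or None if nothing crossed the floor.
    """
    for t, s in enumerate(samples):
        if s > noise_floor:                       # threshold crossing
            best_val, best_t = s, t
            for u in range(t + 1, min(t + window, len(samples))):
                if samples[u] > best_val:         # new greatest value
                    best_val, best_t = samples[u], u
            return best_t                         # tagged time for the subtractor
    return None

def time_difference(left, right, noise_floor, window):
    """Model of the subtractor (D): it only produces an output once both
    detectors have delivered a tagged peak time (the 'new word' bit)."""
    t_left = detect_peak(left, noise_floor, window)
    t_right = detect_peak(right, noise_floor, window)
    if t_left is None or t_right is None:
        return None                               # still waiting for a new word
    return t_left - t_right
```

In this model a negative difference means the left mic's peak arrived earlier, mirroring how the hardware subtractor's signed output indexes into the LUT.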
The lookup table will be filled with an array based on the trigonometry we did in Figure 1. The time difference will serve as the index into the LUT, which will output information to the MicroBlaze (K) representing the horizontal position of the sound source.

Stage 6:

The MicroBlaze has final control over the VGA framebuffer. It arbitrates between the output of diagnostic data obtained from the registers and the visualization of the sound source. In addition, the registers, including those in the codec, will be configurable through a software interface controlled by the MicroBlaze.

Stage 7:

Video output from the framebuffer appears on a display.

Project design issues

1) We are placing the microphones a set distance apart from each other and placing that value into a register, rather than making the distance variable. This is because leaving the distance as a variable would involve leaving all the trigonometry in terms of that variable. We would then need to build a multiplier in VHDL for the LUTs, which would unnecessarily complicate the project.

2) We briefly considered designating one mic as the dominant mic, meaning it would always receive the signal before the other mic. This would simplify the code and eliminate the need for some left/right variables, loops, if statements, et cetera. But a dominant mic would limit the range of directions the sound could come from; it would have to come from closer to the dominant mic than to the other mic.

3) Initially we had the system test for sound by comparing the signal against a noise floor, or threshold. Then whichever comparator received the signal second would test it against the same floor. The problem was finding the range to apply to the signal received by the second mic, since the noise heard by the second mic will not be exactly the same as the noise heard by the first mic.

4) Now, once the signal passes a noise threshold, we continuously sample it for the largest amplitude within that sample window.
This peak detection will be much better able to accurately detect the delay between the left and right mics.

5) Much like the human hearing system, if the system receives a signal of constant amplitude, it will be confused, because the system detects the sound by peaks in the sine wave. It will detect the initial attack, and then the mics will pick up the peaks in the signal, but it will not be able to distinguish which period a peak is from. So the left and right mics will read the same signal, but at different periods, instead of the same signal's peak at one period. The system will actually be reading the phase difference of the shifted sine wave (the wave received by the second mic).
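Issue 5's ambiguity can be quantified: if a steady tone's period is shorter than the worst-case inter-mic delay x/c, a peak detected at one mic cannot be matched to the correct period at the other. The bound below follows from the geometry; the speed-of-sound constant and helper name are illustrative assumptions.

```python
SPEED_OF_SOUND = 343.0  # m/s in air (assumed)

def max_unambiguous_freq(mic_spacing):
    """Highest steady-tone frequency whose period 1/f still exceeds the
    worst-case inter-mic delay mic_spacing / c, so a detected peak can
    be matched to the same period at both mics."""
    return SPEED_OF_SOUND / mic_spacing

# With mics 1 m apart, steady tones above ~343 Hz can alias across
# periods, and the system would effectively read only a phase difference.
```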

