I have been reading Section 1.2 of the microphone array tutorial (the one most understandable to me) over and over.
A sound wave depends on both time and space, as the linked page nicely illustrates.
But I can't understand why the Fourier transform of the spatially sampled data gives the directivity pattern of the microphone array (see Section 1.3.2 of the first document).
Can anyone explain this in a more computer-science-oriented way?
Answer
Have a look at the Scilab code where I answered your other question.
That question is about a discrete aperture, but it is analogous to the question you are asking here.
For that question, the beam pattern is given by: $$ D(\theta) = \sum_{n=0}^{N-1} w_n \exp\left(j\frac{2\pi n d}{\lambda} \sin(\theta) \right). $$ With a judicious choice of parameters, you can see that this looks very much like a discrete Fourier transform of the weights $w_n$, which are analogous to $A_R$ in the paper.
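To make that analogy concrete, here is a minimal NumPy sketch (not the Scilab code referenced above): it evaluates $D(\theta)$ directly from the sum and compares it with a zero-padded FFT of the weights. The parameters (number of sensors, spacing, wavelength) and the uniform weights are illustrative assumptions, not values from the tutorial.

```python
import numpy as np

# Sketch: evaluate D(theta) = sum_n w_n * exp(j * 2*pi*n*d/lambda * sin(theta))
# directly, then recover the same curve from a zero-padded FFT of the weights.
# N, d, wavelength and the uniform taper below are illustrative assumptions.

N = 8                        # number of sensors
wavelength = 1.0             # wavelength (arbitrary units)
d = wavelength / 2           # half-wavelength element spacing
w = np.ones(N)               # weights w_n (uniform taper)

# Direct evaluation of D(theta) on a grid of look angles
theta = np.linspace(-np.pi / 2, np.pi / 2, 1001)
n = np.arange(N)
phase = 2 * np.pi * d / wavelength * np.outer(np.sin(theta), n)
D = np.exp(1j * phase) @ w   # beam pattern, shape (1001,)

# FFT route: D(theta) samples the DTFT of w_n at the normalised spatial
# frequency u = d*sin(theta)/lambda, so a zero-padded FFT of w gives the
# same magnitude curve on a grid that is uniform in u (not in theta).
Nfft = 4096
W = np.fft.fftshift(np.fft.fft(w, Nfft))
u_fft = np.fft.fftshift(np.fft.fftfreq(Nfft))    # u in [-0.5, 0.5)
u_theta = d * np.sin(theta) / wavelength         # u where D(theta) samples

# Because w is real, |DTFT| is even in u, so the FFT's sign convention
# does not affect the magnitude comparison below.
D_from_fft = np.interp(u_theta, u_fft, np.abs(W))
print(np.max(np.abs(np.abs(D) - D_from_fft)))    # small (interpolation only)
```

In other words, sweeping the look angle $\theta$ just reads out the discrete-time Fourier transform of the weight sequence at the spatial frequency $d\sin(\theta)/\lambda$, which is why the directivity pattern looks like a DFT of the spatially sampled weights.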
To obtain the formulae from the paper, you need to think of the continuous aperture rather than discrete sensors.
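Schematically (up to the paper's exact notation and constants), the continuous-aperture version replaces the sum over sensors with an integral over an aperture weighting $A(x)$, so the far-field pattern is the spatial Fourier transform of the weighting evaluated at spatial frequency $\sin(\theta)/\lambda$: $$ D(\theta) = \int_{-\infty}^{\infty} A(x) \exp\left(j \frac{2\pi x}{\lambda} \sin(\theta)\right) \, dx. $$ Sampling $A(x)$ at the sensor positions $x_n = n d$ recovers the discrete sum above.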