[Numpy-discussion] "Dynamic convolution" in Numpy

Anne Archibald aarchiba at physics.mcgill.ca
Sun Jun 6 11:08:06 EDT 2010


On 6 June 2010 04:44, David Cournapeau <cournape at gmail.com> wrote:
> On Thu, Jun 3, 2010 at 7:49 PM, arthur de conihout
> <arthurdeconihout at gmail.com> wrote:
>
>> I don't know if I made myself very clear.
>> If anyone has suggestions or has already implemented dynamic filtering, I
>> would be very interested.
>
> Does fade-in/fade-out actually work? I would have thought that it
> would kill the properties of your filters?
>
> There are two issues:
>  - how to do convolution fast
>  - how to go from one filter to the other

I think the kicker is here: what is the right way to interpolate
between filters?

If you have, or can generate, a lot of filters, then at least you can
evaluate the quality of the interpolation.
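For instance, a leave-one-out style check: hold out each measured filter,
predict it from its neighbours, and compare. A minimal sketch, where
`filters` is assumed to be a list of equal-length impulse responses at
evenly spaced angles and `interp` is whatever interpolation scheme is
being tested (both names are just for illustration):

import numpy as np

def interpolation_error(filters, interp):
    # Hold out every interior measurement, predict it from its two
    # neighbours, and report the mean relative error.
    errs = []
    for i in range(1, len(filters) - 1):
        predicted = interp(filters[i - 1], filters[i + 1], 0.5)
        errs.append(np.linalg.norm(predicted - filters[i])
                    / np.linalg.norm(filters[i]))
    return np.mean(errs)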

The right way to understand what kind of interpolation to use is to
have some idea of the physics you're dealing with. In this case, as I
understand it, you're dealing with the auditory effects of the head,
seen from different angles. I would say that the ear senses something
like integrated power in logarithmically-spaced frequency bins over
roughly twentieth-of-a-second time intervals. So the relevant effects
you care about are amplitude absorption and time delays, if they are
as long as a twentieth of a second. Doing simple linear interpolation,
unfortunately, will probably get you in trouble - imagine, for
example, that two impulse responses have the same amplitude at 440 Hz
but different phases. A linear interpolation will change the amplitude
(for example if they're 180 degrees apart it'll pass through zero).
You might do better with interpolating in polar coordinates, though
you might have phase wrapping issues. A really thorough treatment might
be to take a granular-synthesis approach to the impulse responses,
breaking them up into orthogonal time-domain channels within which the
response is defined by an amplitude and a time delay, which you'd
interpolate in the natural way. I'd try polar interpolation on the
FFTs of the impulse responses first, though (since in fact it's the
same thing with the minimal possible frequency channels).
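To make the 440 Hz example concrete, here is a rough numpy sketch of
both strategies (the function names are mine, purely for illustration):

import numpy as np

def interp_linear(h0, h1, t):
    # Naive time-domain linear interpolation between two impulse responses.
    return (1.0 - t) * h0 + t * h1

def interp_polar(h0, h1, t):
    # Interpolate magnitude and (unwrapped) phase of the FFTs separately.
    # The phase unwrapping here is crude and can still misbehave when the
    # two responses have very different delays.
    n = max(len(h0), len(h1))
    H0, H1 = np.fft.rfft(h0, n), np.fft.rfft(h1, n)
    mag = (1.0 - t) * np.abs(H0) + t * np.abs(H1)
    phase = ((1.0 - t) * np.unwrap(np.angle(H0))
             + t * np.unwrap(np.angle(H1)))
    return np.fft.irfft(mag * np.exp(1j * phase), n)

# The 440 Hz example: equal amplitude, opposite phase.
fs, n = 44100, 1024
tt = np.arange(n) / fs
h0 = np.sin(2 * np.pi * 440 * tt)
h1 = -h0
print(np.sqrt(np.mean(interp_linear(h0, h1, 0.5) ** 2)))  # ~0: collapses
print(np.sqrt(np.mean(interp_polar(h0, h1, 0.5) ** 2)))   # ~0.7: preserved

The RMS check at the end shows the linear blend collapsing to zero while
the polar blend keeps the energy of the originals.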

I suspect that some interpolation, however unrealistic (even linear
interpolation), is necessary, or listeners may perceive sounds
"snapping" from place to place in the aural field.

> The main issue with changing filters is that your system is not LTI
> anymore. If your filters have a finite impulse response, I guess it
> should not be too much of an issue. To do convolution quickly, you
> need to use the FFT, which is a bit tricky if you want to do things
> in real-time, as you need to partition the impulse response. Using
> "partitioned impulse response" as keywords should give you plenty of
> references on how to do it.

As far as convolution goes, as David says, take a look at existing
algorithms and maybe even music software - there's a trade-off between
the n^2 computation of a brute-force FIR filter and the delay
introduced by an FFT approach, but on-the-fly convolution is a
well-studied problem.
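For reference, the simplest FFT block scheme is plain overlap-add; a
hand-rolled numpy sketch (a real partitioned-convolution engine would
also split the impulse response itself into segments to cut latency):

import numpy as np

def overlap_add(x, h, block=4096):
    # Block-wise FFT convolution (overlap-add). Because the input is
    # processed in blocks, the impulse response can be swapped or
    # cross-faded between blocks.
    n_fft = 1
    while n_fft < block + len(h) - 1:
        n_fft *= 2                      # zero-pad to avoid circular wrap
    H = np.fft.rfft(h, n_fft)
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = np.fft.irfft(np.fft.rfft(x[start:start + block], n_fft) * H,
                           n_fft)
        stop = min(start + n_fft, len(y))
        y[start:stop] += seg[:stop - start]
    return y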

Anne

> David


