[Numpy-discussion] Help Convolution with binaural filters(HRTFs)

arthur de conihout arthurdeconihout at gmail.com
Thu May 27 07:00:14 EDT 2010


Hi, thanks for your answer.

This is the first time I have taken part in such a forum; I am a raw recruit
and I don't exactly know how to post properly. I will try to make my project
clearer:

""But I didn't really get the point what your aim is.  As far as I
understood you want to do sth named "spacialise" with the audio, based
on the position of some person with respect to some reference point.
What means "spacialise" in this case?  I guess it's not simply a delay
for creating stereo impression?  I guess it is sth creating also a
room impression, sth like "small or large room"?""

The project consists of Binaural Synthesis (that is its given name), which is
the approach to sound spatialization that comes closest to real-life
listening.
Binaural encoding of spatial information is fundamentally based on the
synthesis of localisation cues, namely the ITD (Interaural Time Difference),
the ILD (Interaural Level Difference) and the SC (Spectral Cues).
One way to create an artificial sound scene is to use binaural filters. The
binaural signals are then obtained by convolving a monophonic source signal
with a pair of binaural filters that reproduce the transfer function of the
acoustic path between the source location and each of the listener's ears.
These transfer functions are referred to as Head Related Transfer Functions,
or HRTFs (their time-domain equivalents are HRIRs, Head Related Impulse
Responses).

These HRTFs can be obtained by measurement. The protocol consists of placing
very small microphones in the ears of a listener and playing white noise from
each direction in his acoustic sphere. We thus obtain the impulse response to
the white noise, which corresponds to the acoustic signature of a sound coming
from the given direction (all of this is done in an anechoic room, so that the
measurement reflects a neutral room without reverb). Then, to "spatialize" a
sound, I convolve two of these IRs (the left- and right-ear responses) with
the monophonic sound, and the result seems to come from the given direction.
The decoding stage uses headphones, which avoids problems with the room
response by bringing the reproduction point as close as possible to the points
where the HRTFs were recorded.
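To make the convolution step concrete, this is roughly the operation I have in
mind (just an illustrative sketch; the names binauralize, hrir_left and
hrir_right are made up):

import numpy

def binauralize(mono, hrir_left, hrir_right):
    # mono and the two HRIRs are 1-D float arrays scaled to [-1, 1]
    left = numpy.convolve(mono, hrir_left)    # signal reaching the left ear
    right = numpy.convolve(mono, hrir_right)  # signal reaching the right ear
    # stack the two ear signals as the columns of an (N, 2) stereo array
    return numpy.column_stack((left, right))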

Here is the part of the code I use for the convolution, with all the wav
handling:

import wave, struct, numpy

SAMPLE_RATE = 88200
# Here I run into trouble: if I set 44100 instead, the final wav is
# under-pitched, even though the original was 44100?  (My guess: if the input
# wav is stereo, the interleaved left/right samples end up written as twice as
# many mono frames, which would explain the half pitch at 44100.)


def convo(foriginal, ffiltre):

    original = wave.open(foriginal, "r")
    filtre = wave.open(ffiltre, "r")

    # Create the file in which the result of the convolution is written.
    filtered = wave.open("/home/arthur/Desktop/NubiaFilteredFIRnoteR.wav",
                         "w")
    filtered.setnchannels(1)
    filtered.setsampwidth(2)
    filtered.setframerate(SAMPLE_RATE)

    # When I unpack the monophonic file and the filter I might be making a
    # mistake with the arguments, so that the convolution is not performed
    # on the entire signal?

    # Get the wav file info to unpack the data properly.  readframes() takes
    # a number of frames and returns nframes * nchannels 16-bit samples, so
    # the struct format is "%dh" with a count of nframes * nchannels
    # ('h' = signed 16-bit integer).  The parentheses matter:
    # "%dh" % nframes * nchannels would repeat the format string instead.
    nframes = original.getnframes()
    nchannels = original.getnchannels()
    original = struct.unpack_from("%dh" % (nframes * nchannels),
                                  original.readframes(nframes))
    # Scale the 16-bit integers to floats in [-1.0, 1.0].
    original = [s / 2.0**15 for s in original]

    nframes = filtre.getnframes()
    nchannels = filtre.getnchannels()
    filtre = struct.unpack_from("%dh" % (nframes * nchannels),
                                filtre.readframes(nframes))
    filtre = [s / 2.0**15 for s in filtre]

    result = numpy.convolve(original, filtre)

    # Scale back to 16-bit integers.  Convolving two full-scale signals can
    # exceed the 16-bit range, so the samples are clipped here; reducing the
    # gain (or normalizing by the peak) may be the better fix.
    result = [int(max(-32768, min(32767, sample * 2.0**15)))
              for sample in result]
    filtered.writeframes(struct.pack('%dh' % len(result), *result))

    filtered.close()


# foriginal and ffiltre hold the paths of the monophonic source and of the
# HRIR filter wav files.
convo(foriginal, ffiltre)
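
As a side note on the scipy.signal.fftconvolve suggestion in your mail (quoted
below): if I understand correctly, it could replace the numpy.convolve call
directly for long files, something like this (untested, assumes scipy is
installed):

from scipy.signal import fftconvolve

# drop-in replacement for numpy.convolve(original, filtre); both default to
# mode="full", but the FFT-based version is much faster for long signals
result = fftconvolve(original, filtre)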


I had a look at what you sent me and I am working through it; maybe your
initialisation tests will allow me to tell the different wav formats apart?
I want to be able to handle every format (16-bit, unsigned, 32-bit). What
precautions do I have to take in the filtering? Do the filter and the
original have to be in the same format, or not?
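
For what it is worth, here is how I imagine the reading could be generalized
to several sample widths (a rough, untested sketch; I assume "unsigned" refers
to the 8-bit format, and the 32-bit branch assumes integer PCM, not float):

import wave, numpy

def read_wav_as_float(path):
    # Read a PCM wav file into floats in [-1, 1], whatever its sample width.
    w = wave.open(path, "r")
    nchannels = w.getnchannels()
    width = w.getsampwidth()
    rate = w.getframerate()
    raw = w.readframes(w.getnframes())
    w.close()
    if width == 1:    # 8-bit wav files are unsigned
        data = (numpy.frombuffer(raw, numpy.uint8).astype(float) - 128.0) / 128.0
    elif width == 2:  # 16-bit wav files are signed
        data = numpy.frombuffer(raw, numpy.int16).astype(float) / 2.0**15
    elif width == 4:  # 32-bit integer PCM is signed
        data = numpy.frombuffer(raw, numpy.int32).astype(float) / 2.0**31
    else:
        raise ValueError("unsupported sample width: %d bytes" % width)
    # one column per channel, plus the sample rate
    return data.reshape(-1, nchannels), rate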
Thank you

AdeC


2010/5/27 Friedrich Romstedt <friedrichromstedt at gmail.com>

> 2010/5/26 arthur de conihout <arthurdeconihout at gmail.com>:
> > i try to implement a real-time convolution module refreshed by head
> > listener location (angle from a reference point).The result of the
> > convolution by binaural flters(HRTFs) allows me to spatialize a
> monophonic
> > wavfile.
>
> I suspect noone not closely involved with your subject can understand
> this.  From what you write later on I guess binaural filters are some
> LTI system?
>
> > I got trouble with this as long as my convolution doesnt seem to
> > work properly:
> > np.convolve() doesnt convolve the entire entry signal
>
> Hmm
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html#numpy-convolve
> claims that the convolution is complete.  Can you give an example of
> what you mean?
>
> Furthermore, I think the note there about scipy.signal.fftconvolve may
> be of large use for you, when you are going to convolve whole wav
> files?
>
> > ->trouble with extracting numpyarrays from the audio wav. filters and
> > monophonic entry
> > ->trouble then with encaspulating the resulting array in a proper wav
> > file...it is not read by audacity
>
> Hmmm I worked one time with wavs using the wave module, which is a
> standard module.  I didn't deal with storing wavs.  I attach the
> reading module for you.  It needs a module mesh2 to import, which I
> don't include to save traffic.  I think the code is understandable
> without it and the method .get_raw_by_frames() may already help
> solving your problem.
>
> But I didn't really get the point what your aim is.  As far as I
> understood you want to do sth named "spacialise" with the audio, based
> on the position of some person with respect to some reference point.
> What means "spacialise" in this case?  I guess it's not simply a delay
> for creating stereo impression?  I guess it is sth creating also a
> room impression, sth like "small or large room"?
>
> Friedrich
>