[SciPy-user] What is missing in scipy to be a top notch environment for signal processing

Peter Wang pwang at enthought.com
Mon Nov 20 22:32:01 EST 2006


On Nov 20, 2006, at 8:02 PM, Chris Bartley wrote:

> Although I understand it is not strictly speaking part of scipy, the
> single, main thing that stops the other guys here using it is the
> speed of plotting large time series datasets (100k-400k data points).
> Matlab plotting is incredibly quick! Matplotlib is great for many
> things, but seems to suffer for large datasets, especially where the
> signal covers a lot of the plot canvas (i.e. a thick, noisy line,
> not a thin line).

Chris,

Have you looked at chaco?  It works well with large data sets: 100k
points is no problem at all, and it is sluggish but minimally usable
at 1 million points.  Take a look at chaco2/examples/bigdata.py.  We
use image-space caching so that interactions which don't modify the
data can be drawn very quickly (press "z" to bring up the zoom box,
for example), and we also clip out data that falls outside the view
area, so rendering gets faster as you zoom in.  The docstring at the
top of bigdata.py describes the different interactions you can try.
If you're interested, one thing to try is changing line 69 to:

     plot.tools.append(PanTool(plot))

so that you can click and drag the plot around.  I have the RangeTool  
in there by default so you can see that selection-type tools which  
draw overlays are very responsive.
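The view-area clipping described above is a general trick, not
something specific to chaco's internals.  As a rough sketch of the
idea (the function name and parameters here are my own, not chaco's
API): before handing points to the renderer, keep only those inside
the current view range, assuming the x data is sorted as in a time
series.

```python
import numpy as np

def clip_to_view(x, y, x_low, x_high):
    """Return the slice of (x, y) visible in [x_low, x_high].

    Hypothetical helper illustrating view-area clipping; assumes x
    is sorted ascending (a time axis).
    """
    lo = np.searchsorted(x, x_low, side="left")
    hi = np.searchsorted(x, x_high, side="right")
    # Widen by one point on each side so line segments that cross
    # the view boundary are still drawn entering/leaving the frame.
    lo = max(lo - 1, 0)
    hi = min(hi + 1, len(x))
    return x[lo:hi], y[lo:hi]

# A 400k-point noisy series, like the datasets under discussion.
x = np.linspace(0.0, 100.0, 400_000)
y = np.sin(x) + np.random.normal(scale=0.3, size=x.size)

# Zoomed in to 1% of the x range, only ~4k points reach the renderer.
xv, yv = clip_to_view(x, y, 42.0, 43.0)
```

The renderer then draws a few thousand points instead of 400k, which
is why zooming in speeds things up rather than slowing them down.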

You can also play with simple_scatter.py; crank up the point count  
and see if it's responsive enough for your needs.  (I did 400k just  
now and it seems OK.)


-Peter



