From hmgaudecker at gmail.com Mon Jul 1 16:41:47 2013 From: hmgaudecker at gmail.com (Hans-Martin v. Gaudecker) Date: Mon, 01 Jul 2013 22:41:47 +0200 Subject: [SciPy-User] SciPy ecosystem and Python 3 In-Reply-To: References: Message-ID: <51D1E98B.6060009@gmail.com> I have been largely working on Python 3 for two years now. I figured that starting long-term-projects with Python 2 was not worth it anymore. After I got some of those going, I gradually ported most of my other projects. Overall it has worked well for me -- with some packages I had to dig deeper into compiling things than I would have liked to, but that seems to be over now as I am not using many exotic things (mostly NumPy, SciPy, Matplotlib, Pandas, Statsmodels, some database stuff). I am currently teaching a software-carpentry-inspired course to about 30 economics MSc students and everybody uses an Anaconda Python 3.3 environment. Again, this has worked very well. The list of included packages is probably a bit too short (especially when compared to Anaconda Python 2.7) as to recommend it by default to newcomers. Maybe this discussion will help to have the gap closed (even) faster. On 01.07.13 19:00, scipy-user-request at scipy.org wrote: > 1. Re: SciPy ecosystem and Python 3 (Ralf Gommers) > > I will be able to install Anaconda or another distribution. > All the basic examples in Python and numpy/scipy docs will work. But I > don't work in a vacuum, so I'll find out at some later stage that some code > that my co-workers wrote depends on version (current minus 2) of some > package that only supports 3.x in version (current). This should be the > exception and not the norm before recommending 3.x imho. I tend to disagree. If I arrive as a newcomer in a work environment where everybody else uses Python 2, I listen to my co-workers, use that, and don't care much about what the website recommends. The website is probably more relevant for people where a vacuum describes the environment fairly well. > Also, if many of the active developers haven't yet moved to 3.x (and yes > that includes me) then it's most definitely too early to recommend said > move to people who aren't very familiar with Python yet. I would argue the other way round: Using Python 3 straight away will avoid a move for newcomers altogether. For long-time Python 2 users, the switching costs are particularly large -- which may explain the reluctance of many developers. But for newcomers, these costs could be avoided entirely (yes, they can be pretty small using 2to3 -- but that's not something one would want to explain to a newcomer in the first few hours, see Thomas' original post). My 2c, Hans-Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Mon Jul 1 19:55:24 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 2 Jul 2013 00:55:24 +0100 Subject: [SciPy-User] SciPy ecosystem and Python 3 In-Reply-To: References: Message-ID: On 30 June 2013 20:04, Ralf Gommers wrote: > That's not quite what I meant. Even on a work pc on which I don't have > admin rights I will be able to install Anaconda or another distribution. > All the basic examples in Python and numpy/scipy docs will work. But I > don't work in a vacuum, so I'll find out at some later stage that some code > that my co-workers wrote depends on version (current minus 2) of some > package that only supports 3.x in version (current). This should be the > exception and not the norm before recommending 3.x imho. 
Again, though, I think your coworkers are more likely to have written code which expects Python 2, than Python-3-compatible code which relies on an older version of a particular library. But that's a chicken and egg problem, and if we always pointed newcomers at the option most likely to preserve compatibility with prior code, then we'd never have started using Python at all. Hans also has a good point: if you're working in a group with a Python codebase, they should show you how to set up the preferred environment for that. I also imagine that we'll need to maintain a warning for some time after we start recommending Python 3, along the lines of "If you find code that doesn't work, it might be that it was never updated to run on Python 3." That's not ideal, but I think being able to point newcomers at the 'latest and greatest' version by default will still be an important improvement. To my mind, the crucial prerequisite is getting robust Python 3 support in the packages that we (the open source SciPy community) develop. Someone has added quite a long list to the Etherpad (Thanks!), but some of them seem quite specialist, e.g. I've never even heard of Gamera or kwant before. I don't think we should hold the recommendation on every specific package that we can find, so the question is which of those packages are important. Obviously that's somewhat subjective, so here's a couple of possible criteria to debate: A project is 'important' if - It's relevant outside one specific field of study (i.e. we wouldn't block the general recommendation on a package specific to, say, quantum physics), and - It's recommended by blog posts/tutorials/textbooks independent of the project and its main authors. Here's the pad again: https://etherpad.mozilla.org/JdAHGQihei Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Jul 1 20:15:41 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 1 Jul 2013 20:15:41 -0400 Subject: [SciPy-User] SciPy ecosystem and Python 3 In-Reply-To: References: Message-ID: On Mon, Jul 1, 2013 at 7:55 PM, Thomas Kluyver wrote: > On 30 June 2013 20:04, Ralf Gommers wrote: >> >> That's not quite what I meant. Even on a work pc on which I don't have >> admin rights I will be able to install Anaconda or another distribution. All >> the basic examples in Python and numpy/scipy docs will work. But I don't >> work in a vacuum, so I'll find out at some later stage that some code that >> my co-workers wrote depends on version (current minus 2) of some package >> that only supports 3.x in version (current). This should be the exception >> and not the norm before recommending 3.x imho. > > > Again, though, I think your coworkers are more likely to have written code > which expects Python 2, than Python-3-compatible code which relies on an > older version of a particular library. But that's a chicken and egg problem, > and if we always pointed newcomers at the option most likely to preserve > compatibility with prior code, then we'd never have started using Python at > all. > > Hans also has a good point: if you're working in a group with a Python > codebase, they should show you how to set up the preferred environment for > that. I also imagine that we'll need to maintain a warning for some time > after we start recommending Python 3, along the lines of "If you find code > that doesn't work, it might be that it was never updated to run on Python > 3." 
That's not ideal, but I think being able to point newcomers at the > 'latest and greatest' version by default will still be an important > improvement. > > To my mind, the crucial prerequisite is getting robust Python 3 support in > the packages that we (the open source SciPy community) develop. Someone has > added quite a long list to the Etherpad (Thanks!), but some of them seem > quite specialist, e.g. I've never even heard of Gamera or kwant before. I > don't think we should hold the recommendation on every specific package that > we can find, so the question is which of those packages are important. > > Obviously that's somewhat subjective, so here's a couple of possible > criteria to debate: A project is 'important' if > - It's relevant outside one specific field of study (i.e. we wouldn't block > the general recommendation on a package specific to, say, quantum physics), > and > - It's recommended by blog posts/tutorials/textbooks independent of the > project and its main authors. I think it would be better to have a central list where users/developers can add information about field specific packages. It won't help those users if the scientific python core is available but some crucial packages in their field are not available on python 3. And not all fields have big communities that can do it on their own. Josef > > Here's the pad again: https://etherpad.mozilla.org/JdAHGQihei > > Thanks, > Thomas > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From klonuo at gmail.com Mon Jul 1 20:46:27 2013 From: klonuo at gmail.com (klo uo) Date: Tue, 2 Jul 2013 02:46:27 +0200 Subject: [SciPy-User] Curious about the contents of __config__.py Message-ID: Hi, I downloaded latest installers for numpy and scipy from sourceforge, as I wasn't feeling ambitious to build from source. show_config() lists library dirs as: numpy: 'library_dirs': ['C:\\local\\lib\\atlas\\sse3'] scipy: 'library_dirs': ['C:\\local\\lib\\yop\\sse3'] macros: numpy: 'define_macros': [('NO_ATLAS_INFO', -1)] scipy: 'define_macros': [('ATLAS_INFO', '"\\"?.?.?\\""')] As I had already compiled ATLAS libraries, I edited all __config__.py files and set this: 'library_dirs': ['C:\\lib\\ATLAS3.6.0_P4SSE2'] 'define_macros': [('ATLAS_INFO', '"\\"3.6.0\\""')] I'm curious what is this good for? Is it only about packaging? Does other python packages depend on these variables set in __config__.py files, and is it fine that I did edit the files to reflect my system? From josef.pktd at gmail.com Mon Jul 1 21:08:26 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 1 Jul 2013 21:08:26 -0400 Subject: [SciPy-User] Curious about the contents of __config__.py In-Reply-To: References: Message-ID: On Mon, Jul 1, 2013 at 8:46 PM, klo uo wrote: > Hi, > > I downloaded latest installers for numpy and scipy from sourceforge, > as I wasn't feeling ambitious to build from source. > > show_config() > > lists library dirs as: > > numpy: 'library_dirs': ['C:\\local\\lib\\atlas\\sse3'] > scipy: 'library_dirs': ['C:\\local\\lib\\yop\\sse3'] > > macros: > > numpy: 'define_macros': [('NO_ATLAS_INFO', -1)] > scipy: 'define_macros': [('ATLAS_INFO', '"\\"?.?.?\\""')] > > As I had already compiled ATLAS libraries, I edited all __config__.py > files and set this: > > 'library_dirs': ['C:\\lib\\ATLAS3.6.0_P4SSE2'] > 'define_macros': [('ATLAS_INFO', '"\\"3.6.0\\""')] > > I'm curious what is this good for? Is it only about packaging? 
Does > other python packages depend on these variables set in __config__.py > files, and is it fine that I did edit the files to reflect my system? AFAIK, from when I was still building scipy These are the files scipy was build against by the build script for the scipy binaries. Since the libraries are statically linked, it doesn't matter what other ATLAS you have on your computer, they are not the ones used by this scipy installation, and the changes to the config info won't reflect the "real" libraries. for example, your scipy uses the sse3 libraries, while your ATLAS looks like sse2 Josef > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From klonuo at gmail.com Mon Jul 1 21:22:10 2013 From: klonuo at gmail.com (klo uo) Date: Tue, 2 Jul 2013 03:22:10 +0200 Subject: [SciPy-User] Curious about the contents of __config__.py In-Reply-To: References: Message-ID: On Tue, Jul 2, 2013 at 3:08 AM, Josef wrote: > > AFAIK, from when I was still building scipy > > These are the files scipy was build against by the build script for > the scipy binaries. Since the libraries are statically linked, it > doesn't matter what other ATLAS you have on your computer, they are > not the ones used by this scipy installation, and the changes to the > config info won't reflect the "real" libraries. > > for example, your scipy uses the sse3 libraries, while your ATLAS > looks like sse2 So my edit was useless. Good to know. Thanks for your fast reply From josef.pktd at gmail.com Mon Jul 1 21:28:12 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 1 Jul 2013 21:28:12 -0400 Subject: [SciPy-User] Curious about the contents of __config__.py In-Reply-To: References: Message-ID: On Mon, Jul 1, 2013 at 9:22 PM, klo uo wrote: > On Tue, Jul 2, 2013 at 3:08 AM, Josef wrote: >> >> AFAIK, from when I was still building scipy >> >> These are the files scipy was build against by the build script for >> the scipy binaries. Since the libraries are statically linked, it >> doesn't matter what other ATLAS you have on your computer, they are >> not the ones used by this scipy installation, and the changes to the >> config info won't reflect the "real" libraries. >> >> for example, your scipy uses the sse3 libraries, while your ATLAS >> looks like sse2 > > > So my edit was useless. Good to know. to answer the last part as for purpose: The information is useful as debug information when there are problems with one of the libraries. (like "Please report your show_config if you have a problem with xxx") However, I don't know if any installers check the config info to see whether packages are fortran compatible, my guess is they don't. for example, mixing gcc numpy with mkl scipy should cause some errors. I usually pay enough attention which installers I use, that I never found out that information. (official binaries versus Gohlke binaries) Josef > > Thanks for your fast reply > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From ralf.gommers at gmail.com Tue Jul 2 02:47:42 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 2 Jul 2013 08:47:42 +0200 Subject: [SciPy-User] SciPy ecosystem and Python 3 In-Reply-To: References: Message-ID: On Tue, Jul 2, 2013 at 1:55 AM, Thomas Kluyver wrote: > On 30 June 2013 20:04, Ralf Gommers wrote: > >> That's not quite what I meant. 
Even on a work pc on which I don't have >> admin rights I will be able to install Anaconda or another distribution. >> All the basic examples in Python and numpy/scipy docs will work. But I >> don't work in a vacuum, so I'll find out at some later stage that some code >> that my co-workers wrote depends on version (current minus 2) of some >> package that only supports 3.x in version (current). This should be the >> exception and not the norm before recommending 3.x imho. > > > Again, though, I think your coworkers are more likely to have written code > which expects Python 2, than Python-3-compatible code which relies on an > older version of a particular library. But that's a chicken and egg > problem, and if we always pointed newcomers at the option most likely to > preserve compatibility with prior code, then we'd never have started using > Python at all. > I understand it's a chicken and egg problem, but newcomers are not the right group to solve that. You want to give them the recommendation that helps them get started with the least amount of trouble. One bad experience (and having to do some serious debugging or even downgrade to 2.x is bad) can be enough to chase them back to Matlab. We'll get there eventually, but only when a good portion (say 50%) of existing users and devs have moved. > Hans also has a good point: if you're working in a group with a Python > codebase, they should show you how to set up the preferred environment for > that. > If only the real world was that simple:) > I also imagine that we'll need to maintain a warning for some time after > we start recommending Python 3, along the lines of "If you find code that > doesn't work, it might be that it was never updated to run on Python 3." > No no no. That's a terrible sentence to write. Imagine you reading that if you move to new language X. That's not ideal, but I think being able to point newcomers at the 'latest > and greatest' version by default will still be an important improvement. > Please keep in mind that it's much more important to you, as an active dev who cares about 3.x adoption, then to them. All newcomers are getting for now is some compatibility issues and strings they don't understand. On the upside they don't have to move a few years later, but the business case is thin. Cheers, Ralf P.S. I do agree with you on where we need to go, there's just no need to be in a hurry imho -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Jul 2 04:42:58 2013 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 2 Jul 2013 09:42:58 +0100 Subject: [SciPy-User] At.: question about refresh numpy array in a for-cycle In-Reply-To: <1372287629.89197.YahooMailNeo@web142306.mail.bf1.yahoo.com> References: <1372287629.89197.YahooMailNeo@web142306.mail.bf1.yahoo.com> Message-ID: On Thu, Jun 27, 2013 at 12:00 AM, Jos? Luis Mietta < joseluismietta at yahoo.com.ar> wrote: > > Hi experts! > Im writing a code with a numpy array L, the numpy matrix M and the next script: > > for x in L: > for l in srange(N): > z= l in L > if z is False and M[x,l] != 0: > L=np.append(L,l) > > > here, in the end of the cycle, new elements are incorporated to the array 'L'. > I want these new elements be considered as 'x' index in the cycle. > When I execute the script I see that only the 'originals' elements of L are considered as 'x'. > > How can i fix it? There are a couple of things going on here. 
First, "for x in L:" always iterates over the object initially assigned to the name "L". If that name gets reassigned to a different object during the course of the loop, it won't change the iteration. That's just how Python works. Second, if you can modify the object in-place in the loop, that will affect the iteration, but this is usually a bad idea. It becomes very hard to reason about what is going to happen, and you will usually get it wrong. np.append() cannot modify its array argument in-place. numpy arrays are generally of a fixed size throughout their lifetime for various reasons. That's why you had to reassign the result of np.append() back to the name L. You need to use a Python list or some other extendable object in order to modify the iteration in-place. Generally, np.append() is a sign that you need to use some other data structure. from collections import deque # Convert to a list object that is efficient for appending. # We will accumulate results in this list. L = list(L) # Make a First-In-First-Out queue out of the items. # We will pull work items from this queue. queue = deque(L) while queue: x = queue.popleft() for l in srange(N): if l not in L and M[x,l] != 0: # Add it to the results. L.append(l) # And to the work queue for further processing. queue.append(l) # I guess we need this back as an array again. L = np.array(L) -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From polish at dtgroup.com Tue Jul 2 10:38:23 2013 From: polish at dtgroup.com (Nathaniel Polish) Date: Tue, 02 Jul 2013 10:38:23 -0400 Subject: [SciPy-User] SciPy ecosystem and Python 3 In-Reply-To: References: Message-ID: <8E1D9210262E0EB93059C088@[192.168.1.131]> I am mostly a lurker here but as a practicing computer scientist and engineer, I thought I'd offer my two cents. We face this issue when getting into any system that is new to us. Which version of unix? C, C++, Java, PHP, etc...All have their issues with respect to versions. All such systems have life cycles. Its always best to get your work done within a single product cycle. Shifting in the middle of project is usually deadly. Anyone showing up to numpy etc now to get specific work done should really be directed to 2.7. That's where the maturity is. 3 is where the world is going but the time scale is uncertain. Pushing new users to 3 just to get a community is a bad idea. The experienced folks should be the ones establishing the new version. I know that everyone is trying... I faced this two years ago when I starting working with numpy. It was confusing. However I quickly realized as a CS person what was going on. 2.7 was solid and mature. Errors that I found could almost always be assumed to be my fault. Everything I was doing could be assumed to have been tried by someone else before. Version 3 had none of those attributes so I stuck to 2.7. This is kind of the way that I feel about C and Unix. Solid and mature. Nothing much new. For what its worth, if the community does not make the switch to version 3 reasonably soon, it will risk losing people to other systems that ARE successfully moving through new versions. Its hard with such a far flung community but that's what we are. There, I will now retreat to the shadows from whence I came. You folks do amazing work and I am mostly in awe of its quality. 
--On Tuesday, July 02, 2013 8:47 AM +0200 Ralf Gommers wrote: > > > > > > > On Tue, Jul 2, 2013 at 1:55 AM, Thomas Kluyver wrote: > > > > > > On 30 June 2013 20:04, Ralf Gommers wrote: > > That's not quite what I meant. Even on a work pc on which I don't have > admin rights I will be able to install Anaconda or another distribution. > All the basic examples in Python and numpy/scipy docs will work. But I > don't work in a vacuum, so I'll find out at some later stage that some > code that my co-workers wrote depends on version (current minus 2) of > some package that only supports 3.x in version (current). This should be > the exception and not the norm before recommending 3.x imho. > > > > Again, though, I think your coworkers are more likely to have written > code which expects Python 2, than Python-3-compatible code which relies > on an older version of a particular library. But that's a chicken and egg > problem, and if we always pointed newcomers at the option most likely to > preserve compatibility with prior code, then we'd never have started > using Python at all. > > > > > I understand it's a chicken and egg problem, but newcomers are not the > right group to solve that. You want to give them the recommendation that > helps them get started with the least amount of trouble. One bad > experience (and having to do some serious debugging or even downgrade to > 2.x is bad) can be enough to chase them back to Matlab. > > > We'll get there eventually, but only when a good portion (say 50%) of > existing users and devs have moved. > > ? > > > > > Hans also has a good point: if you're working in a group with a Python > codebase, they should show you how to set up the preferred environment > for that. > > > > > If only the real world was that simple:) > ? > > > > I also imagine that we'll need to maintain a warning for some time after > we start recommending Python 3, along the lines of "If you find code that > doesn't work, it might be that it was never updated to run on Python 3." > > > > > No no no. That's a terrible sentence to write. Imagine you reading that > if you move to new language X. > > > > > That's not ideal, but I think being able to point newcomers at the > 'latest and greatest' version by default will still be an important > improvement. > > > > > Please keep in mind that it's much more important to you, as an active > dev who cares about 3.x adoption, then to them. All newcomers are getting > for now is some compatibility issues and strings they don't understand. > On the upside they don't have to move a few years later, but the business > case is thin. > > > Cheers, > Ralf > > > P.S. I do agree with you on where we need to go, there's just no need to > be in a hurry imho > From takowl at gmail.com Tue Jul 2 11:00:43 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 2 Jul 2013 16:00:43 +0100 Subject: [SciPy-User] SciPy ecosystem and Python 3 In-Reply-To: References: Message-ID: On 2 July 2013 07:47, Ralf Gommers wrote: > Please keep in mind that it's much more important to you, as an active dev > who cares about 3.x adoption, then to them. All newcomers are getting for > now is some compatibility issues and strings they don't understand. On the > upside they don't have to move a few years later, but the business case is > thin. > I'm not just doing this to cheerlead Python 3 adoption. Many of us have seen newcomers being confused by the split. 
I don't have references handy, but I've heard about courses that have asked people to preinstall Python, and despite careful instructions, people have turned up with a mixture of Python 2 and Python 3, which then wastes valuable time while everyone gets to the same starting point. Discussion sites see regular 'should I use 2 or 3' threads. And it's easy to imagine potential users who're evaluating Python against alternative solutions, and get put off by the 2/3 split, though we probably don't hear from them. Again, I'm not saying that we should promote Python 3 now - there's still some way to go. I'm trying to define how far it is, and how we can measure it. You suggest we should wait until maybe 50% of existing users and devs have switched. I wouldn't wait that long, because I think it's fine for new users to change quicker than old users, but this is the sort of criterion I want to discuss. How would we go about estimating how many users/devs use Python 3? It can seem like no-one is, but my conversations at SciPy, and bug reports on IPython, suggest that that's not entirely true. Christoph (& Josef said something similar) > I think a webpage summarising Python version compatibility would be a great resource. That's an interesting idea. There are already sites out there which list Python 3 support for PyPI packages, but there's certainly room for something more detailed. Specifically, it could know about: - Which minor Python versions does X support (e.g. 2.6, 2.7, 3.3) - How has this support changed in recent releases of X - Is X version y packaged for Python version z in Debian/Macports/Anaconda/etc. Is anybody interested in creating this and keeping it updated? We could use PyPI classifiers as a starting point, but there would need to be extra information layered on top of that. However, we shouldn't overstate the importance of this for newcomers: you know which field you're working in, but you often don't know which packages you're going to need until you've already written quite a bit of code. So you can't just look up the packages you'll need before you start with SciPy. Nathaniel: > I faced this two years ago when I starting working with numpy. It was > confusing. However I quickly realized as a CS person what was going on. > 2.7 was solid and mature. Precisely. However, many of our target audience aren't CS people, and will only see that they're being asked to use an 'old' version for some reason. Also, the Py3 ecosystem is much more mature than it was two years ago. At that time, it was easy to point to inarguably important packages like matplotlib as a reason to start with Python 2. Today, that's still possible in specific fields, but it's increasingly hard to find Python-2-only packages that new SciPy users in general are likely to need. Best wishes, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Tue Jul 2 11:10:57 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 2 Jul 2013 16:10:57 +0100 Subject: [SciPy-User] SciPy ecosystem and Python 3 In-Reply-To: References: Message-ID: Hi, On Tue, Jul 2, 2013 at 4:00 PM, Thomas Kluyver wrote: > On 2 July 2013 07:47, Ralf Gommers wrote: >> >> Please keep in mind that it's much more important to you, as an active dev >> who cares about 3.x adoption, then to them. All newcomers are getting for >> now is some compatibility issues and strings they don't understand. On the >> upside they don't have to move a few years later, but the business case is >> thin. 
> > > I'm not just doing this to cheerlead Python 3 adoption. Many of us have seen > newcomers being confused by the split. I don't have references handy, but > I've heard about courses that have asked people to preinstall Python, and > despite careful instructions, people have turned up with a mixture of Python > 2 and Python 3, which then wastes valuable time while everyone gets to the > same starting point. Discussion sites see regular 'should I use 2 or 3' > threads. And it's easy to imagine potential users who're evaluating Python > against alternative solutions, and get put off by the 2/3 split, though we > probably don't hear from them. Agreeing with Thomas: Most of us when starting with a new software stack, look for the latest version. I guess this is because it's fun to use the latest stuff, and because it's annoying learning habits for stuff that will soon be deprecated or raise an error. We could explain why that should not be the case for scientific python, but I imagine that new users will be a little puzzled and maybe worried that we should be holding to an old version so long after the release of the new. I do believe we should have slight bias towards 3 rather 2 for the sake of the health of the overall python ecosystem, which will have to move. I don't know how we would know when we should 'recommend' 3 though. Best, Matthew From parrenin.ujf at gmail.com Tue Jul 2 11:20:35 2013 From: parrenin.ujf at gmail.com (=?ISO-8859-1?Q?Fr=E9d=E9ric_Parrenin?=) Date: Tue, 2 Jul 2013 17:20:35 +0200 Subject: [SciPy-User] workbooks in matplotlib Message-ID: Dear all, Did anybody ever suggested to organize figures in workbooks using tabs? If in a project you create many figures, each one opens a new window and it quickly becomes inconvenient. Best regards, Fr?d?ric Parrenin -------------- next part -------------- An HTML attachment was scrubbed... URL: From parrenin.ujf at gmail.com Tue Jul 2 11:24:59 2013 From: parrenin.ujf at gmail.com (=?ISO-8859-1?Q?Fr=E9d=E9ric_Parrenin?=) Date: Tue, 2 Jul 2013 17:24:59 +0200 Subject: [SciPy-User] a=b operation in numpy Message-ID: Dear all, Does anybody know what is the rational in defining for the 'a=b' operation a pointer copy and not a complete copy? It is inconsistent from an algebraic point of view, since 'a=b' is not the same as 'a=b+0'. And it is also confusing for starters. Best regards, Fr?d?ric Parrenin -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Jul 2 11:26:09 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 2 Jul 2013 11:26:09 -0400 Subject: [SciPy-User] SciPy ecosystem and Python 3 In-Reply-To: References: Message-ID: On Tue, Jul 2, 2013 at 11:00 AM, Thomas Kluyver wrote: > On 2 July 2013 07:47, Ralf Gommers wrote: >> >> Please keep in mind that it's much more important to you, as an active dev >> who cares about 3.x adoption, then to them. All newcomers are getting for >> now is some compatibility issues and strings they don't understand. On the >> upside they don't have to move a few years later, but the business case is >> thin. > > > I'm not just doing this to cheerlead Python 3 adoption. Many of us have seen > newcomers being confused by the split. I don't have references handy, but > I've heard about courses that have asked people to preinstall Python, and > despite careful instructions, people have turned up with a mixture of Python > 2 and Python 3, which then wastes valuable time while everyone gets to the > same starting point. 
Discussion sites see regular 'should I use 2 or 3' > threads. And it's easy to imagine potential users who're evaluating Python > against alternative solutions, and get put off by the 2/3 split, though we > probably don't hear from them. > > Again, I'm not saying that we should promote Python 3 now - there's still > some way to go. I'm trying to define how far it is, and how we can measure > it. You suggest we should wait until maybe 50% of existing users and devs > have switched. I wouldn't wait that long, because I think it's fine for new > users to change quicker than old users, but this is the sort of criterion I > want to discuss. > > How would we go about estimating how many users/devs use Python 3? It can > seem like no-one is, but my conversations at SciPy, and bug reports on > IPython, suggest that that's not entirely true. > > Christoph (& Josef said something similar) >> I think a webpage summarising Python version compatibility would be a >> great resource. > > That's an interesting idea. There are already sites out there which list > Python 3 support for PyPI packages, but there's certainly room for something > more detailed. Specifically, it could know about: > - Which minor Python versions does X support (e.g. 2.6, 2.7, 3.3) > - How has this support changed in recent releases of X > - Is X version y packaged for Python version z in > Debian/Macports/Anaconda/etc. > > Is anybody interested in creating this and keeping it updated? We could use > PyPI classifiers as a starting point, but there would need to be extra > information layered on top of that. > > However, we shouldn't overstate the importance of this for newcomers: you > know which field you're working in, but you often don't know which packages > you're going to need until you've already written quite a bit of code. So > you can't just look up the packages you'll need before you start with SciPy. We have a http://scipy.org/topical-software.html which is partially maintained, and maintained almost only by the community. A similar information for the python 3 status would be a good starting point for users to decide on the python version. > > Nathaniel: > >> I faced this two years ago when I starting working with numpy. It was >> confusing. However I quickly realized as a CS person what was going on. >> 2.7 was solid and mature. > > Precisely. However, many of our target audience aren't CS people, and will > only see that they're being asked to use an 'old' version for some reason. > > Also, the Py3 ecosystem is much more mature than it was two years ago. At > that time, it was easy to point to inarguably important packages like > matplotlib as a reason to start with Python 2. Today, that's still possible > in specific fields, but it's increasingly hard to find Python-2-only > packages that new SciPy users in general are likely to need. (started to write this in response to Nathaniel but now fits better here) numpy, scipy, pandas, statsmodels have been available for python 3 for more than 2 years now. We usually get compatibility with new python 3 versions as soon as they come out. (aside: python 3 only http://biogeme.epfl.ch/doc/install.html for transportation modelling ) I think the recommendation should depend on the setting a user is in. 
I'm mostly with Hans-Martin that students that don't move immediately into an established production setting, or new users without an established 2.x peer group should start with python 3, especially if it only takes another year until the stragglers are also available on python 3. (spyder is on it's way) my impression from the statsmodels conversion two years ago: numerical code, all our base algorithms, required almost no changes in moving to python 3, the main adjustments were in input and output. So, I think, the numerical code is essentially as reliable on python 3 as on python 2. (of course this build largely on top of the changes that numpy and scipy made.) About developers: I'm working at the tail end of the supported dependencies of statsmodels, so I can catch backwards compatibility problems. But as a user, I would prefer to work with the "latest and greatest" and take advantage of new features instead of lagging several years behind. Cheers, Josef > > Best wishes, > Thomas > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue Jul 2 11:33:40 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 2 Jul 2013 11:33:40 -0400 Subject: [SciPy-User] a=b operation in numpy In-Reply-To: References: Message-ID: On Tue, Jul 2, 2013 at 11:24 AM, Fr?d?ric Parrenin wrote: > Dear all, > > Does anybody know what is the rational in defining for the 'a=b' operation a > pointer copy and not a complete copy? > It is inconsistent from an algebraic point of view, since 'a=b' is not the > same as 'a=b+0'. > And it is also confusing for starters. python variables are references, fortunately, we don't get copying of data all the time like some other (special purpose) languages http://forums.udacity.com/questions/8767/python-variables-are-they-really-pointers Josef > > Best regards, > > Fr?d?ric Parrenin > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From charlesr.harris at gmail.com Tue Jul 2 11:54:19 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 2 Jul 2013 09:54:19 -0600 Subject: [SciPy-User] SciPy ecosystem and Python 3 In-Reply-To: References: Message-ID: On Tue, Jul 2, 2013 at 9:10 AM, Matthew Brett wrote: > Hi, > > On Tue, Jul 2, 2013 at 4:00 PM, Thomas Kluyver wrote: > > On 2 July 2013 07:47, Ralf Gommers wrote: > >> > >> Please keep in mind that it's much more important to you, as an active > dev > >> who cares about 3.x adoption, then to them. All newcomers are getting > for > >> now is some compatibility issues and strings they don't understand. On > the > >> upside they don't have to move a few years later, but the business case > is > >> thin. > > > > > > I'm not just doing this to cheerlead Python 3 adoption. Many of us have > seen > > newcomers being confused by the split. I don't have references handy, but > > I've heard about courses that have asked people to preinstall Python, and > > despite careful instructions, people have turned up with a mixture of > Python > > 2 and Python 3, which then wastes valuable time while everyone gets to > the > > same starting point. Discussion sites see regular 'should I use 2 or 3' > > threads. 
And it's easy to imagine potential users who're evaluating > Python > > against alternative solutions, and get put off by the 2/3 split, though > we > > probably don't hear from them. > > Agreeing with Thomas: > > Most of us when starting with a new software stack, look for the > latest version. I guess this is because it's fun to use the latest > stuff, and because it's annoying learning habits for stuff that will > soon be deprecated or raise an error. > Good point. I always used to download the latest and greatest version of everything in the expectation that it would be better ;) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cweisiger at msg.ucsf.edu Tue Jul 2 23:54:03 2013 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Tue, 2 Jul 2013 20:54:03 -0700 Subject: [SciPy-User] Custom array serialization Message-ID: I'm working on a game project; more specifically, right now I'm working on saving and loading the game. As a result, I need to serialize the game state to a file, and deserialize it later. To pre-empt some responses, I spent a lot of time thinking about this before starting, and came to the conclusion that pickle and other similar automatic [de]serialization libraries were not suitable for this problem. The sticking point is that these libraries invariably let you put code into the serialized object, which code is then executed when you deserialize it. As a result, if you have the deserialization routine in your code, then you have a security breach. I would rather my users be able to distribute savefiles without worrying that one of them has been sabotaged to do something malicious. Instead, I'm manually serializing to JSON, and manually deserializing. It's actually working decently well so far. I've hit one minor sticking point though: numpy array serialization. Of course I'm aware of numpy.tostring(), but that doesn't preserve type information. And I don't know of a good way to serialize the type and then deserialize it later. In other words, basically I want some way to do this: def serializeArray(data): type = convert data.dtype to a string? dataStr = data.tostring() return "%s:%s" % (type, dataStr) and then later def deserializeArray(dataString): type, dataStr = dataString.split(':') somehow convert type to a numpy.dtype object? return numpy.fromstring(dataStr, dtype = type) How do I do this? I assume it must be possible. I can hack around it by only supporting a limited number of types that I manually convert to/from strings (e.g. if dtype is float64, then I store "float64" as the type string), but that makes the code ugly. Any advice would be appreciated. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Jul 3 05:21:26 2013 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 3 Jul 2013 10:21:26 +0100 Subject: [SciPy-User] Custom array serialization In-Reply-To: References: Message-ID: On Wed, Jul 3, 2013 at 4:54 AM, Chris Weisiger wrote: > > I'm working on a game project; more specifically, right now I'm working on saving and loading the game. As a result, I need to serialize the game state to a file, and deserialize it later. > > To pre-empt some responses, I spent a lot of time thinking about this before starting, and came to the conclusion that pickle and other similar automatic [de]serialization libraries were not suitable for this problem. 
The sticking point is that these libraries invariably let you put code into the serialized object, which code is then executed when you deserialize it. As a result, if you have the deserialization routine in your code, then you have a security breach. I would rather my users be able to distribute savefiles without worrying that one of them has been sabotaged to do something malicious. > > Instead, I'm manually serializing to JSON, and manually deserializing. It's actually working decently well so far. I've hit one minor sticking point though: numpy array serialization. Of course I'm aware of numpy.tostring(), but that doesn't preserve type information. And I don't know of a good way to serialize the type and then deserialize it later. Use the .npy format that np.save() uses: https://github.com/numpy/numpy/blob/master/numpy/lib/format.py The write_array() and read_array() functions are the ones you would use. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Wed Jul 3 09:54:21 2013 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 03 Jul 2013 15:54:21 +0200 Subject: [SciPy-User] determining success of scipy.optimize.basinhopping() Message-ID: <51D42D0D.4080005@ntc.zcu.cz> Hi! How can I obtain the local minimizer convergence success/failure from the basinhopping() function? I would like to know that at least one local minimization (and hence the global one) converged. The result structure contains only the 'message' attribute, but not the 'success' attribute. I could parse the message, but it is not exactly what I need to know. Thanks, r. From franz_lambert_engel at yahoo.de Thu Jul 4 03:50:15 2013 From: franz_lambert_engel at yahoo.de (Franz Engel) Date: Thu, 4 Jul 2013 08:50:15 +0100 (BST) Subject: [SciPy-User] Reduction of spatial with small differences Message-ID: <1372924215.90932.YahooMailNeo@web172205.mail.ir2.yahoo.com> Hello, I have an numpy.array with 3D points. Some of the points are very close to each other. Now I want reduce points they have a distance smaller than x. For example rawArray [[1 2 2] ?[1 3 3] ?[1 4 4] ?[1 4 4] ?[1 5 5] ?[1 6 6] ?[1 6.1 6] ?[1 6.1 6.1] ?[1 6.2 6.1]] make reduction [[1 2 2] ?[1 3 3] ?[1 4 4] ?[1 5 5] ?[1 6.1 6.1]] Is there a common way to do that? Or is thera a good keyword what I can looking for? Regards, ? ? Franz -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Thu Jul 4 17:20:19 2013 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 4 Jul 2013 17:20:19 -0400 Subject: [SciPy-User] Reduction of spatial with small differences In-Reply-To: <1372924215.90932.YahooMailNeo@web172205.mail.ir2.yahoo.com> References: <1372924215.90932.YahooMailNeo@web172205.mail.ir2.yahoo.com> Message-ID: > I have an numpy.array with 3D points. Some of the points are very close to each other. Now I want reduce points they have a distance smaller than x. You'll probably need to specify your problem a bit more clearly: what if you have an evenly-spaced array of points that are 0.9x distance apart? Should that be reduced to a single point? Depending on how pathological your data are, this could basically be a clustering problem. If your data are guaranteed non-pathological (all points distantly spaced except small clusters spaced within x) then all you need to do is find those clusters, which you could do by calculating the full distance matrix (exact) or with a kd-tree (fast), both available in scipy.spatial. 
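A rough greedy sketch of that kd-tree route (assuming scipy.spatial.KDTree; the points and threshold below are just the example values from the original post, and it keeps the first point of each tight group rather than a centroid):

import numpy as np
from scipy.spatial import KDTree

points = np.array([[1, 2, 2],
                   [1, 3, 3],
                   [1, 4, 4],
                   [1, 4, 4],
                   [1, 5, 5],
                   [1, 6, 6],
                   [1, 6.1, 6],
                   [1, 6.1, 6.1],
                   [1, 6.2, 6.1]], dtype=float)
x = 0.5   # drop points closer than this to an already-kept point

tree = KDTree(points)
kept = []
for i, p in enumerate(points):
    # indices of all points within distance x of p (includes i itself)
    neighbours = tree.query_ball_point(p, x)
    # keep p only if none of its neighbours has already been kept
    if not any(j in kept for j in neighbours if j != i):
        kept.append(i)

reduced = points[kept]
# no two kept points are closer than x, and every dropped point
# lies within x of some kept point

For messier data (e.g. chains of points each within x of the next), the clustering route mentioned above is the more robust option.
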
Zach From vaggi.federico at gmail.com Fri Jul 5 08:02:05 2013 From: vaggi.federico at gmail.com (federico vaggi) Date: Fri, 5 Jul 2013 14:02:05 +0200 Subject: [SciPy-User] Reduction of spatial with small differences Message-ID: The easiest way is probably to cluster your points, then pick the centroids of the clusters as your new points. http://docs.scipy.org/doc/scipy/reference/cluster.html has the functions that you want. Alternatively, if you want a more naive implementation: You can do something like this: from scipy.spatial.distance import pdist, squareform X= [[1, 2, 2], [1, 3, 3], [1, 4, 4], [1, 4, 4], [1, 5, 5], [1, 6, 6], [1, 6.1, 6], [1, 6.1, 6.1], [1, 6.2, 6.1]] eps = 0.2 close_idx = squareform(pdist(X, 'euclidean'))pdist(X, 'euclidean')) < eps (you don't have to work with the squareform, but it's much much easier). and that gives you the indices of the pairs that are close enough to each other. However - this approach becomes very complicated if you have situations where multiple clusters of points are close to each other. Message: 1 > Date: Thu, 4 Jul 2013 08:50:15 +0100 (BST) > From: Franz Engel > Subject: [SciPy-User] Reduction of spatial with small differences > To: "scipy-user at scipy.org" > Message-ID: > <1372924215.90932.YahooMailNeo at web172205.mail.ir2.yahoo.com> > Content-Type: text/plain; charset="iso-8859-1" > > Hello, > > I have an numpy.array with 3D points. Some of the points are very close to > each other. Now I want reduce points they have a distance smaller than x. > For example > rawArray > [[1 2 2] > ?[1 3 3] > ?[1 4 4] > ?[1 4 4] > ?[1 5 5] > ?[1 6 6] > ?[1 6.1 6] > ?[1 6.1 6.1] > ?[1 6.2 6.1]] > > make reduction > [[1 2 2] > ?[1 3 3] > ?[1 4 4] > ?[1 5 5] > ?[1 6.1 6.1]] > > Is there a common way to do that? Or is thera a good keyword what I can > looking for? > > Regards, > ? ? Franz > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-user/attachments/20130704/722294a2/attachment-0001.html > > ------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-User Digest, Vol 119, Issue 7 > ****************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremy at jeremysanders.net Tue Jul 9 03:31:59 2013 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Tue, 09 Jul 2013 08:31:59 +0100 Subject: [SciPy-User] ANN: Veusz 1.18 Message-ID: Veusz 1.18 ---------- http://home.gna.org/veusz/ Veusz is a scientific plotting package. It is designed to produce publication-ready Postscript/PDF/SVG output. Graphs are built-up by combining plotting widgets. The user interface aims to be simple, consistent and powerful. Veusz provides GUI, Python module, command line, scripting, DBUS and SAMP interfaces to its plotting facilities. It also allows for manipulation and editing of datasets. Data can be captured from external sources such as Internet sockets or other programs. 
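As a rough illustration of the Python module (embedding) interface mentioned above -- only a sketch, so the exact method names and arguments should be checked against the embedding section of the manual:

import numpy as np
import veusz.embed as veusz

# open a new embedded Veusz window and get an Embedded object
embed = veusz.Embedded('demo')

# load data into the document as named datasets
x = np.linspace(0., 10., 100)
embed.SetData('x', x)
embed.SetData('y', np.sin(x))

# build up the plot from widgets: page -> graph -> xy plotter
page = embed.Root.Add('page')
graph = page.Add('graph')
xy = graph.Add('xy', xData='x', yData='y')

# write out a publication-ready file
embed.Export('demo.pdf')
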
Changes in 1.18: * Add support for dataset expressions when plotting * Add axis-function widget for plotting axes which have a scale given by a function, or are linked to a different axis via a function * Add stepped colour maps * Support editing multiple datasets simultaneously in editor * Add setting to fix aspect-ratio of graphs * Add 'vcentre' line step mode for vertical step plots * Add internal margin setting for grids to separate sub-plots * Add pixel, pixel_wcs, fraction and linear_wcs FITS import coordinate system modes * Add drop down toolbar button menu to create axis widgets * More efficient widget dependency resolution Bug fixes: * Fix reversed 'broken'-axes * Do not always draw axes above other widgets (fixes problem with key below axis) * Fix use of transparency image when plotting non-square images * Allow lists passed as xrange and yrange to create 2D dataset * Fix FieldBool positioning for plugins * QDP import: fix "no" values when used mixed with numbers * Remove warning of log images with zeros * For embedded mode, always return string for __repr__ * Workaround for windows appearing behind for Mac OS X * Improve property spacing on Mac OS X Features of package: Plotting features: * X-Y plots (with errorbars) * Line and function plots * Contour plots * Images (with colour mappings and colorbars) * Stepped plots (for histograms) * Bar graphs * Vector field plots * Box plots * Polar plots * Ternary plots * Plotting dates * Fitting functions to data * Stacked plots and arrays of plots * Nested plots * Plot keys * Plot labels * Shapes and arrows on plots * LaTeX-like formatting for text * Multiple axes * Axes with steps in axis scale (broken axes) * Axis scales using functional forms * Plotting functions of datasets Input and output: * EPS/PDF/PNG/SVG/EMF export * Dataset creation/manipulation * Embed Veusz within other programs * Text, CSV, FITS, NPY/NPZ, QDP, binary and user-plugin importing * Data can be captured from external sources Extending: * Use as a Python module * User defined functions, constants and can import external Python functions * Plugin interface to allow user to write or load code to - import data using new formats - make new datasets, optionally linked to existing datasets - arbitrarily manipulate the document * Scripting interface * Control with DBUS and SAMP Other features: * Data picker * Interactive tutorial * Multithreaded rendering Requirements for source install: Python 2.x (2.6 or greater required) http://www.python.org/ Qt >= 4.4 (free edition) http://www.trolltech.com/products/qt/ PyQt >= 4.5 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/software/pyqt/ http://www.riverbankcomputing.co.uk/software/sip/ numpy >= 1.0 http://numpy.scipy.org/ Optional: PyFITS >= 1.1 (optional for FITS import) http://www.stsci.edu/resources/software_hardware/pyfits pyemf >= 2.0.0 (optional for EMF export) http://pyemf.sourceforge.net/ PyMinuit >= 1.1.2 (optional improved fitting) http://code.google.com/p/pyminuit/ For EMF and better SVG export, PyQt >= 4.6 or better is required, to fix a bug in the C++ wrapping dbus-python, for dbus interface http://dbus.freedesktop.org/doc/dbus-python/ astropy (optional for VO table import) http://www.astropy.org/ SAMPy (optional for SAMP support) http://pypi.python.org/pypi/sampy/ Veusz is Copyright (C) 2003-2013 Jeremy Sanders and contributors. It is licenced under the GPL (version 2 or greater). For documentation on using Veusz, see the "Documents" directory. 
The manual is in PDF, HTML and text format (generated from docbook). The examples are also useful documentation. Please also see and contribute to the Veusz wiki: http://barmag.net/veusz-wiki/ Issues with the current version: * Due to a bug in the Qt XML processing, some MathML elements containing purely white space (e.g. thin space) will give an error. If you enjoy using Veusz, we would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The latest code can always be found in the Git repository at https://github.com/jeremysanders/veusz.git. From x.piter at gmail.com Fri Jul 12 14:13:47 2013 From: x.piter at gmail.com (Petro) Date: Fri, 12 Jul 2013 20:13:47 +0200 Subject: [SciPy-User] fitting with convolution? Message-ID: Hi all, I try to fir a time-resolved dataset with multiple exponents convoluted with a Gaussian instrument response function (IRF). I had a look how it is done in Origin http://wiki.originlab.com/~originla/howto/index.php?title=Tutorial:Fitting_With_Convolution There fft_fft_convolution calculates the circular convolution of an exponent with IRF. I have found a similar function for python here: http://stackoverflow.com/questions/6855169/convolution-computations-in-numpy-scipy This convolution also can be calculated analytically as, for example, in this package: http://www.photonfactory.auckland.ac.nz/uoa/home/photon-factory/pytra def convolutedexp(tau,mu,fwhm,x): d = (fwhm/(2*sqrt(2*log(2)))) return 0.5*exp(-x/tau)*exp((mu+(d**2.)/(2.*tau))/tau)* (1.+erf((x-(mu+(d**2.)/tau))/(sqrt(2.)*d))) def gaussian(mu,fwhm,x): d = (fwhm/(2.*sqrt(2.*log(2.)))) return exp(-((x-mu)**2.)/(2.*d**2.)) My problem is if I compare analytical and circular convolution they do not match: _____source_________ import numpy from scipy.special import erf def cconv(a, b): ''' Computes the circular convolution of the (real-valued) vectors a and b. ''' return fft.ifft(fft.fft(a) * fft.fft(b)).real def convolutedexp(tau,mu,fwhm,x): d = (fwhm/(2*sqrt(2*log(2)))) return 0.5*exp(-x/tau)*exp((mu+(d**2.)/(2.*tau))/tau)*(1.+erf((x-(mu+(d**2.)/tau))/(sqrt(2.)*d))) def gaussian(mu,fwhm,x): d = (fwhm/(2.*sqrt(2.*log(2.)))) return exp(-((x-mu)**2.)/(2.*d**2.)) t = array(linspace(-10.0,1000.0,2040.0))[:-1] mu = 0 fwhm = 4.0 tau = 20.0 uf = gaussian(mu,fwhm,t) vf = exp(-t/tau) figure(figsize=[12,12]) plot(t,uf) #plot(t,vf) uvf1 = cconv(uf,vf) plot(tuv,uvf1/14.5) uvf2 = convolutedexp(tau,mu,fwhm,t) plot(t,uvf2) xlim([-10,20]) ____source_end___ My feeling is that I miss something about convolution? Can anybody give me a hint? Thanks. Petro From tmp50 at ukr.net Fri Jul 12 15:46:12 2013 From: tmp50 at ukr.net (Dmitrey) Date: Fri, 12 Jul 2013 22:46:12 +0300 Subject: [SciPy-User] new free software for knapsack problem Message-ID: <1373658171.847348466.d5wdxm7s@fmst-1.ukr.net> Hi all, FYI new free software for knapsack problem ( http://en.wikipedia.org/wiki/Knapsack_problem ) has been made (written in Python language); it can solve possibly constrained, possibly (with interalg ) nonlinear and multiobjective problems with specifiable accuracy. Along with interalg lots of? MILP ? solvers can be used. See http://openopt.org/KSP for details. Regards, Dmitrey. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andy.terrel at gmail.com Tue Jul 2 08:20:13 2013 From: andy.terrel at gmail.com (Andy Ray Terrel) Date: Tue, 2 Jul 2013 07:20:13 -0500 Subject: [SciPy-User] [Common IR] Introductions Message-ID: Hello all, I'm emailing everyone who may be interested in this topic. I would love to keep everything on one archival list, since NumFOCUS is a common place for many projects, I'm sending things there. Further emails should drop all folks except numfocus at googlegroups.com , to join the list email numfocus+subscribe at googlegroups.com with the word Subscribe as the subject. Subject ----------- At SciPy2013, we had a discussion about creating a common intermediate representation to support the wide array of code generation activities going on in the Python ecosystem. My summary can be found at: https://github.com/IgnitionProject/ignition/wiki/CodeGenComposability_Scipy2013 It is a wiki, feel free to edit. Further ---------- I will be sending a number of emails to the NumFOCUS list that are different responses from emails around the event. Please forward this email to folks you think would be interested. -- Andy From jreback at yahoo.com Wed Jul 3 05:57:49 2013 From: jreback at yahoo.com (Jeff Reback) Date: Wed, 3 Jul 2013 05:57:49 -0400 Subject: [SciPy-User] Custom array serialization In-Reply-To: References: Message-ID: <73901060-C164-4B17-8846-DCD2CAAAAE3A@yahoo.com> Pandas 0.12 (releasing shorty), will have full-dtype support for JSON serialization/deserialization of DataFrames via a bundled USJON parser see here: http://pandas.pydata.org/pandas-docs/dev/io.html#json On Jul 3, 2013, at 5:21 AM, Robert Kern wrote: > On Wed, Jul 3, 2013 at 4:54 AM, Chris Weisiger wrote: > > > > I'm working on a game project; more specifically, right now I'm working on saving and loading the game. As a result, I need to serialize the game state to a file, and deserialize it later. > > > > To pre-empt some responses, I spent a lot of time thinking about this before starting, and came to the conclusion that pickle and other similar automatic [de]serialization libraries were not suitable for this problem. The sticking point is that these libraries invariably let you put code into the serialized object, which code is then executed when you deserialize it. As a result, if you have the deserialization routine in your code, then you have a security breach. I would rather my users be able to distribute savefiles without worrying that one of them has been sabotaged to do something malicious. > > > > Instead, I'm manually serializing to JSON, and manually deserializing. It's actually working decently well so far. I've hit one minor sticking point though: numpy array serialization. Of course I'm aware of numpy.tostring(), but that doesn't preserve type information. And I don't know of a good way to serialize the type and then deserialize it later. > > Use the .npy format that np.save() uses: > > https://github.com/numpy/numpy/blob/master/numpy/lib/format.py > > The write_array() and read_array() functions are the ones you would use. > > -- > Robert Kern > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abeardmore at gmail.com Fri Jul 5 07:38:43 2013 From: abeardmore at gmail.com (GRBChaser) Date: Fri, 5 Jul 2013 04:38:43 -0700 (PDT) Subject: [SciPy-User] ftol and xtol In-Reply-To: References: <1370472243657-18355.post@n7.nabble.com> <1370491726005-18358.post@n7.nabble.com> <1372107134975-18455.post@n7.nabble.com> Message-ID: <1373024323916-18518.post@n7.nabble.com> On the subject of ftol and xtol, the help for fmin describes these parameters as : xtol : float Relative error in xopt acceptable for convergence. ftol : number Relative error in func(xopt) acceptable for convergence. The use of the word "Relative" here has always implied to me that the convergence tests calculated by fmin refer to fractional changes in parameters or function values. However, I had difficulties trying to minimize a function in which the two parameters defer by many orders or magnitude (e.g. 1 and 1e10) and I've come to the conclusion it is because the large parameter never passes the convergence test. The code in fmin which tests for convergence is : if (max(numpy.ravel(abs(sim[1:]-sim[0]))) <= xtol \ and max(abs(fsim[0]-fsim[1:])) <= ftol): break This does not look like a test for the relative change in parameters or function values to me, but rather a test for their absolute changes. A relative change would surely be something like if (max(numpy.ravel(abs((sim[1:]-sim[0])/sim[0]))) <= xtol \ and max(abs((fsim[0]-fsim[1:])/fsim[0])) <= ftol): break (though care should really be taken for possible divide by zero errors). Either the docstring needs changing so "Relative" is replaced with "Absolute" to make it clear, or the convergence test needs to be changed so it truly is a relative error test. -- View this message in context: http://scipy-user.10969.n7.nabble.com/ftol-and-xtol-tp18355p18518.html Sent from the Scipy-User mailing list archive at Nabble.com. From rahulgarg44 at gmail.com Fri Jul 5 13:09:57 2013 From: rahulgarg44 at gmail.com (Rahul Garg) Date: Fri, 5 Jul 2013 10:09:57 -0700 (PDT) Subject: [SciPy-User] [Common IR] Introductions In-Reply-To: References: Message-ID: <906e450b-1422-4cae-a556-71f80cfa4014@googlegroups.com> Hi everyone. Just wanted to say I am also watching the topic with interest. I have been building a compiler toolkit (with emphasis on toolkit, it is completely reusable for anyone) for CPUs and GPUs myself and has many of the ideas discussed here. I am hoping to make the framework public soon. rahul PhD student McGill University On Tuesday, July 2, 2013 8:20:13 AM UTC-4, Andy Terrel wrote: > > Hello all, > > I'm emailing everyone who may be interested in this topic. I would > love to keep everything on one archival list, since NumFOCUS is a > common place for many projects, I'm sending things there. Further > emails should drop all folks except numf... at googlegroups.com , to > join the list email numfocus+... at googlegroups.com with the > word > Subscribe as the subject. > > Subject > ----------- > > At SciPy2013, we had a discussion about creating a common intermediate > representation to support the wide array of code generation activities > going on in the Python ecosystem. My summary can be found at: > > > https://github.com/IgnitionProject/ignition/wiki/CodeGenComposability_Scipy2013 > > It is a wiki, feel free to edit. > > Further > ---------- > > I will be sending a number of emails to the NumFOCUS list that are > different responses from emails around the event. Please forward this > email to folks you think would be interested. 
> > -- Andy > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rahulgarg44 at gmail.com Fri Jul 5 13:19:50 2013 From: rahulgarg44 at gmail.com (Rahul Garg) Date: Fri, 5 Jul 2013 10:19:50 -0700 (PDT) Subject: [SciPy-User] [Common IR] Introductions In-Reply-To: <906e450b-1422-4cae-a556-71f80cfa4014@googlegroups.com> References: <906e450b-1422-4cae-a556-71f80cfa4014@googlegroups.com> Message-ID: Also, I wanted to suggest that we also keep in mind some use-cases (i.e. real applications) in mind. One of the issues I have is that while I may think up any number of fancy constructs for GPUs, I don't often find that many applications that can benefit from GPUs due to issues such as data transfer overhead or insufficient parallelism. Having some real applications, written in Python (not a simple wrapper around a C library), as expected use cases for the IR will be very helpful. rahul On Friday, July 5, 2013 1:09:57 PM UTC-4, Rahul Garg wrote: > > Hi everyone. > > Just wanted to say I am also watching the topic with interest. I have > been building a compiler toolkit (with emphasis on toolkit, it is > completely reusable for anyone) for CPUs and GPUs myself and has many of > the ideas discussed here. I am hoping to make the framework public soon. > > rahul > PhD student > McGill University > > On Tuesday, July 2, 2013 8:20:13 AM UTC-4, Andy Terrel wrote: >> >> Hello all, >> >> I'm emailing everyone who may be interested in this topic. I would >> love to keep everything on one archival list, since NumFOCUS is a >> common place for many projects, I'm sending things there. Further >> emails should drop all folks except numf... at googlegroups.com , to >> join the list email numfocus+... at googlegroups.com with the word >> Subscribe as the subject. >> >> Subject >> ----------- >> >> At SciPy2013, we had a discussion about creating a common intermediate >> representation to support the wide array of code generation activities >> going on in the Python ecosystem. My summary can be found at: >> >> >> https://github.com/IgnitionProject/ignition/wiki/CodeGenComposability_Scipy2013 >> >> It is a wiki, feel free to edit. >> >> Further >> ---------- >> >> I will be sending a number of emails to the NumFOCUS list that are >> different responses from emails around the event. Please forward this >> email to folks you think would be interested. >> >> -- Andy >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scopatz at gmail.com Fri Jul 5 14:34:50 2013 From: scopatz at gmail.com (Anthony Scopatz) Date: Fri, 5 Jul 2013 13:34:50 -0500 Subject: [SciPy-User] [Common IR] Introductions In-Reply-To: References: <906e450b-1422-4cae-a556-71f80cfa4014@googlegroups.com> Message-ID: > not a simple wrapper around a C library Damn those simple wrappers! :) Be Well Anthony On Fri, Jul 5, 2013 at 12:19 PM, Rahul Garg wrote: > Also, I wanted to suggest that we also keep in mind some use-cases (i.e. > real applications) in mind. One of the issues I have is that while I may > think up any number of fancy constructs for GPUs, I don't often find that > many applications that can benefit from GPUs due to issues such as data > transfer overhead or insufficient parallelism. Having some real > applications, written in Python (not a simple wrapper around a C library), > as expected use cases for the IR will be very helpful. > > rahul > > > On Friday, July 5, 2013 1:09:57 PM UTC-4, Rahul Garg wrote: >> >> Hi everyone. 
>> >> Just wanted to say I am also watching the topic with interest. I have >> been building a compiler toolkit (with emphasis on toolkit, it is >> completely reusable for anyone) for CPUs and GPUs myself and has many of >> the ideas discussed here. I am hoping to make the framework public soon. >> >> rahul >> PhD student >> McGill University >> >> On Tuesday, July 2, 2013 8:20:13 AM UTC-4, Andy Terrel wrote: >>> >>> Hello all, >>> >>> I'm emailing everyone who may be interested in this topic. I would >>> love to keep everything on one archival list, since NumFOCUS is a >>> common place for many projects, I'm sending things there. Further >>> emails should drop all folks except numf... at googlegroups.com , to >>> join the list email numfocus+... at googlegroups.com with the word >>> Subscribe as the subject. >>> >>> Subject >>> ----------- >>> >>> At SciPy2013, we had a discussion about creating a common intermediate >>> representation to support the wide array of code generation activities >>> going on in the Python ecosystem. My summary can be found at: >>> >>> https://github.com/**IgnitionProject/ignition/wiki/** >>> CodeGenComposability_Scipy2013 >>> >>> It is a wiki, feel free to edit. >>> >>> Further >>> ---------- >>> >>> I will be sending a number of emails to the NumFOCUS list that are >>> different responses from emails around the event. Please forward this >>> email to folks you think would be interested. >>> >>> -- Andy >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From b.evans at yale.edu Fri Jul 5 16:46:33 2013 From: b.evans at yale.edu (Benjamin Evans) Date: Fri, 5 Jul 2013 16:46:33 -0400 Subject: [SciPy-User] scipy 0.11.0 to 0.12.0 changes scipy.interpolate.interp1d, breaks constantly updated interpolator Message-ID: Hello all, I have been playing around with a package that uses a linear scipy.interpolate.interp1d to create a history function for the ode solver in scipy, described here. The relevant bit of code goes something like

def update(self, ti, Y):
    """ Add one new (ti, yi) to the interpolator """
    self.itpr.x = np.hstack([self.itpr.x, [ti]])
    yi = np.array([Y]).T
    self.itpr.y = np.hstack([self.itpr.y, yi])
    self.itpr.fill_value = Y

Where "self.itpr" is initialized in __init__:

def __init__(self, g, tc=0):
    """ g(t) = expression of Y(t) for t < tc """
    self.g = g
    self.tc = tc
    # We must fill the interpolator with 2 points minimum
    self.itpr = scipy.interpolate.interp1d(
        np.array([tc-1, tc]), # X
        np.array([self.g(tc), self.g(tc)]).T, # Y
        kind='linear', bounds_error=False,
        fill_value = self.g(tc))

Where g is some function that returns an array of values that are solutions to a set of differential equations and tc is the current time.

This seems nice to me because a new interpolator object doesn't have to be re-created every time I want to update the ranges of values (which happens at each explicit time step during a simulation). This method of updating the interpolator works well under scipy v 0.11.0. However, after updating to v 0.12.0 I ran into issues. I see that the new interpolator now includes an array _y.

Is it safe and/or sane to just update _y as outlined above as well? Is there a simpler, more pythonic way to address this that would hopefully be more robust to future updates in scipy? Again, in v 0.11 everything works well and expected results are produced, and in v 0.12 I get an IndexError when _y is referenced, as it isn't updated in my function while y itself is.

Any help/pointers would be appreciated!

Thanks! Ben Evans -------------- next part -------------- An HTML attachment was scrubbed... URL: From Phillip.M.Feldman at gmail.com Sat Jul 6 03:32:05 2013 From: Phillip.M.Feldman at gmail.com (pfeldman) Date: Sat, 6 Jul 2013 00:32:05 -0700 (PDT) Subject: [SciPy-User] ftol and xtol In-Reply-To: <1373024323916-18518.post@n7.nabble.com> References: <1370472243657-18355.post@n7.nabble.com> <1370491726005-18358.post@n7.nabble.com> <1372107134975-18455.post@n7.nabble.com> <1373024323916-18518.post@n7.nabble.com> Message-ID: If one uses relative error, it is unclear how to handle the situation where the minimum is at zero. So, absolute error seems more practical.
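(For illustration only: one common middle ground is to scale the tolerance by the size of the current best point, with a floor of 1 so the check degrades gracefully to an absolute test when the minimum is near zero. The sketch below is not scipy's actual convergence test; the names sim and fsim simply mirror the fmin snippet quoted below.)

import numpy as np

def converged(sim, fsim, xtol=1e-4, ftol=1e-4):
    # Mixed relative/absolute test: tolerances are scaled by the size of
    # the best simplex vertex, with a floor of 1.0 so the comparison falls
    # back to an absolute test when the values are of order one or smaller.
    x_scale = max(1.0, np.max(np.abs(sim[0])))
    f_scale = max(1.0, abs(fsim[0]))
    x_ok = np.max(np.abs(sim[1:] - sim[0])) <= xtol * x_scale
    f_ok = np.max(np.abs(fsim[0] - fsim[1:])) <= ftol * f_scale
    return x_ok and f_ok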
On Fri, Jul 5, 2013 at 4:38 AM, GRBChaser [via Scipy-User] < ml-node+s10969n18518h56 at n7.nabble.com> wrote: > On the subject of ftol and xtol, the help for fmin describes these > parameters as : > > xtol : float > Relative error in xopt acceptable for convergence. > ftol : number > Relative error in func(xopt) acceptable for convergence. > > > The use of the word "Relative" here has always implied to me that the > convergence tests calculated by fmin refer to fractional changes in > parameters or function values. However, I had difficulties trying to > minimize a function in which the two parameters defer by many orders or > magnitude (e.g. 1 and 1e10) and I've come to the conclusion it is because > the large parameter never passes the convergence test. > > The code in fmin which tests for convergence is : > > if (max(numpy.ravel(abs(sim[1:]-sim[0]))) <= xtol \ > and max(abs(fsim[0]-fsim[1:])) <= ftol): > break > > This does not look like a test for the relative change in parameters or > function values to me, but rather a test for their absolute changes. A > relative change would surely be something like > > if (max(numpy.ravel(abs((sim[1:]-sim[0])/sim[0]))) <= xtol \ > and max(abs((fsim[0]-fsim[1:])/fsim[0])) <= ftol): > break > > (though care should really be taken for possible divide by zero errors). > > > Either the docstring needs changing so "Relative" is replaced with > "Absolute" to make it clear, or the convergence test needs to be changed so > it truly is a relative error test. > > > > ------------------------------ > If you reply to this email, your message will be added to the discussion > below: > http://scipy-user.10969.n7.nabble.com/ftol-and-xtol-tp18355p18518.html > To unsubscribe from ftol and xtol, click here > . > NAML > -- View this message in context: http://scipy-user.10969.n7.nabble.com/ftol-and-xtol-tp18355p18520.html Sent from the Scipy-User mailing list archive at Nabble.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anubhab91 at gmail.com Sat Jul 6 15:13:07 2013 From: anubhab91 at gmail.com (anubhab91) Date: Sat, 6 Jul 2013 12:13:07 -0700 (PDT) Subject: [SciPy-User] Combinatorics in Scipy Message-ID: <1373137987587-18521.post@n7.nabble.com> Hi, Can anybody please tell me, if there are any functions to calculate various combinatorics functions like permutation, combination, Bernoulli numbers etc? Is someone doing on these topics? Regards. -- View this message in context: http://scipy-user.10969.n7.nabble.com/Combinatorics-in-Scipy-tp18521.html Sent from the Scipy-User mailing list archive at Nabble.com. From mutantturkey at gmail.com Sun Jul 14 09:16:40 2013 From: mutantturkey at gmail.com (Calvin Morrison) Date: Sun, 14 Jul 2013 09:16:40 -0400 Subject: [SciPy-User] Combinatorics in Scipy In-Reply-To: <1373137987587-18521.post@n7.nabble.com> References: <1373137987587-18521.post@n7.nabble.com> Message-ID: Itertools might cover this in basic areas On Jul 13, 2013 6:32 PM, "anubhab91" wrote: > Hi, > Can anybody please tell me, if there are any functions to calculate various > combinatorics functions like permutation, combination, Bernoulli numbers > etc? Is someone doing on these topics? > > Regards. > > > > -- > View this message in context: > http://scipy-user.10969.n7.nabble.com/Combinatorics-in-Scipy-tp18521.html > Sent from the Scipy-User mailing list archive at Nabble.com. 
> _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielcarlminer at gmail.com Tue Jul 16 05:45:17 2013 From: danielcarlminer at gmail.com (Daniel Miner) Date: Tue, 16 Jul 2013 11:45:17 +0200 Subject: [SciPy-User] Color Lists in Dendrograms / Hierarchical Clustering Message-ID: Hi everyone, I'm trying to use hierarchical clustering to tease out some structure in data that I already know exists as a sort of test case to (hopefully) show that it can be reliably done for the type of data I'm concerned with. To this end, knowing that the full call for dendrogram generation is: scipy.cluster.hioerarchy.dendrogram(Z, p=30, truncate_mode=None, color_threshold=None, get_leaves=True, orientation='top',labels=None, count_sort=False, distance_sort=False, show_leaf_counts=True, no_plot=False, no_labels=False, color_list=None,leaf_font_size=None, leaf_rotation=None, leaf_label_func=None, no_leaves=False, show_contracted=False,link_color_func=None) I use one of the linkage algorithms to generate the linkage, manually create a list of colors "c_list" as a list with a color corresponding to each known category of original data for each data point - i.e. if I have data [1,2,3] and know that 1 and 2 come from the same category but 3 is different, I make a list ['r','r','g'] - and try to use it as follows: import matplotlib.pyplot as pt import scipy.cluster.hierarchy as sc [import other stuff] [load data DAT, generate color list] lw = sc.ward(DAT) dw = sc.dendrogram(lw,color_list='c_list') pt.show() However, the colors seem to do nothing. I've tried listing them both numerically (i.e. [1,2,3]) and as characters (i.e. ['r','g','b']), and have tried making the call with c_list in single quotes as shown and with no quotes at all. There is no documentation at http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.htmlregarding an expected format for the color list, and without this color labeling, I can't check to see if the clustering is doing what I hope it does, as there are many data points and it would be prohibitively difficult to read though each of the tiny index labels at the bottom of the default dendrogram plot. I really have no idea how to proceed in order to make this work and am hoping that someone here can provide some advice. Thanks. Best regards, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrocher at enthought.com Wed Jul 17 18:43:47 2013 From: jrocher at enthought.com (Jonathan Rocher) Date: Wed, 17 Jul 2013 17:43:47 -0500 Subject: [SciPy-User] [ANN] 4th Python Symposium at AMS2014 Message-ID: [Apologies for the cross-post] Dear all, If you work with Python around themes like big data, climate, meteorological or oceanic science, and/or GIS, you should come present at the 4th Python Symposium, as part of the American Meteorological Society conference in Atlanta in Feb 2014: http://annual.ametsoc.org/2014/index.cfm/programs-and-events/conferences-and-symposia/fourth-symposium-on-advances-in-modeling-and-analysis-using-python/ The *abstract deadline is Aug 1st*! Jonathan -- Jonathan Rocher, PhD Scientific software developer SciPy2013 conference co-chair Enthought, Inc. jrocher at enthought.com 1-512-536-1057 http://www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From caraciol at gmail.com Thu Jul 18 17:19:40 2013 From: caraciol at gmail.com (Marcel Caraciolo) Date: Thu, 18 Jul 2013 18:19:40 -0300 Subject: [SciPy-User] =?iso-8859-1?q?Fwd=3A_Submiss=E3o_de_palestras_cient?= =?iso-8859-1?q?=EDficas_na_Pythonbrasil_2014?= In-Reply-To: References: Message-ID: Hi all, My name is Marcel and I am from Brazil. I'd like to invite you all to submit talks or projects related to scientific computing with Python at our IX Python Brazilian Conference (PythonBrasil). It is one of the largest events at Brazil related to development and it happens once a year. This year we will have a special track for science. So if you have any work with python, libraries, or even representants interested to submit a talk and meet the Brazilian churrasco food :) You're all invited! :) To submit, follow the links: http://2013.pythonbrasil.org.br/pythonbrasil http://2013.pythonbrasil.org.br/pythonbrasil/sobre-o-evento/noticias/chamada-de-trabalhos Submissions are open until July'26! Regards, -- Marcel Pinheiro Caraciolo M.S.C. Candidate at CIN/UFPE http://www.mobideia.com http://aimotion.blogspot.com/ -- Marcel Pinheiro Caraciolo M.S.C. Candidate at CIN/UFPE http://www.mobideia.com http://aimotion.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_baddeley at yahoo.com.au Fri Jul 19 09:57:32 2013 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Fri, 19 Jul 2013 06:57:32 -0700 (PDT) Subject: [SciPy-User] fitting with convolution? In-Reply-To: References: Message-ID: <1374242252.1533.YahooMailNeo@web163906.mail.gq1.yahoo.com> Hi Petro, when I run the code I get very similar, although not identical curves, which are roughly what you'd expect. Your exponential kernel that you're convolving with is not band limited, but FFTs (and hence FFT based convolution) are band-limited to whatever your effective nyquist frequency is (strictly speaking out of band frequencies might be aliased into the pass band - but in your case you can just think of the high frequency components as being discarded). This probably applies generally to any discrete way of doing convolution, when compared to an analytical solution. Put simply, you can never sample fine enough ?to reproduce all the detail of the initial spikey bit of your exponential (if you increase your sampling frequency you'll get a better match, but you'll never quite get there). If you are referring to the factor of 14.5, it is a result of fft normalisation, and should more accurately be sqrt(N)/pi. hope this helps, David ________________________________ From: Petro To: scipy-user at scipy.org Sent: Friday, 12 July 2013 2:13 PM Subject: [SciPy-User] fitting with convolution? Hi all, I try to fir a time-resolved dataset with multiple exponents convoluted with a Gaussian instrument response function (IRF). I had a look how it is done in Origin http://wiki.originlab.com/~originla/howto/index.php?title=Tutorial:Fitting_With_Convolution There fft_fft_convolution calculates the circular convolution of an exponent with IRF. I have found a similar function for python here: http://stackoverflow.com/questions/6855169/convolution-computations-in-numpy-scipy This convolution also can be calculated analytically as, for example, in this package: http://www.photonfactory.auckland.ac.nz/uoa/home/photon-factory/pytra def convolutedexp(tau,mu,fwhm,x): ? ? d = (fwhm/(2*sqrt(2*log(2)))) ? ? 
return 0.5*exp(-x/tau)*exp((mu+(d**2.)/(2.*tau))/tau)* (1.+erf((x-(mu+(d**2.)/tau))/(sqrt(2.)*d))) def gaussian(mu,fwhm,x): ??? d = (fwhm/(2.*sqrt(2.*log(2.)))) ??? return exp(-((x-mu)**2.)/(2.*d**2.)) My problem is if I compare analytical and circular convolution they do not match: _____source_________ import numpy from scipy.special import erf def cconv(a, b): ? ? ''' ? ? Computes the circular convolution of the (real-valued) vectors a and b. ? ? ''' ? ? return fft.ifft(fft.fft(a) * fft.fft(b)).real def convolutedexp(tau,mu,fwhm,x): ? ? d = (fwhm/(2*sqrt(2*log(2)))) ? ? return 0.5*exp(-x/tau)*exp((mu+(d**2.)/(2.*tau))/tau)*(1.+erf((x-(mu+(d**2.)/tau))/(sqrt(2.)*d))) def gaussian(mu,fwhm,x): ??? d = (fwhm/(2.*sqrt(2.*log(2.)))) ??? return exp(-((x-mu)**2.)/(2.*d**2.)) t = array(linspace(-10.0,1000.0,2040.0))[:-1] mu = 0 fwhm = 4.0 tau = 20.0 uf = gaussian(mu,fwhm,t) vf = exp(-t/tau) figure(figsize=[12,12]) plot(t,uf) #plot(t,vf) uvf1 = cconv(uf,vf) plot(tuv,uvf1/14.5) uvf2 = convolutedexp(tau,mu,fwhm,t) plot(t,uvf2) xlim([-10,20]) ____source_end___ My feeling is that I miss something about convolution? Can anybody give me a hint? Thanks. Petro _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Jul 20 10:59:42 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 20 Jul 2013 16:59:42 +0200 Subject: [SciPy-User] scipy 0.11.0 to 0.12.0 changes scipy.interpolate.interp1d, breaks constantly updated interpolator In-Reply-To: References: Message-ID: On Fri, Jul 5, 2013 at 10:46 PM, Benjamin Evans wrote: > Hello all, > > I have been playing around with a package that uses a linear > scipy.interpolate.interp1d to create a history function for the ode solver > in scipy, described here > . > > The relevant bit of code goes something like > > def update(self, ti, Y): > """ Add one new (ti, yi) to the interpolator """ > self.itpr.x = np.hstack([self.itpr.x, [ti]]) > yi = np.array([Y]).T > self.itpr.y = np.hstack([self.itpr.y, yi]) > self.itpr.fill_value = Y > > Where "self.itpr" is initialized in __init__: > > def __init__(self, g, tc=0): > """ g(t) = expression of Y(t) for t > self.g = g > self.tc = tc > # We must fill the interpolator with 2 points minimum > self.itpr = scipy.interpolate.interp1d( > np.array([tc-1, tc]), # X > np.array([self.g(tc), self.g(tc)]).T, # Y > kind='linear', bounds_error=False, > fill_value = self.g(tc)) > > Where g is some function that returns an array of values that are > solutions to a set of differential equations and tc is the current time. > > This seems nice to me because a new interpolator object doesn't have to be > re-created every time I want to update the ranges of values (which happens > at each explicit time step during a simulation). This method of updating > the interpolator works well under scipy v 0.11.0. However, after updating > to v 0.12.0 I ran into issues. I see that the new interpolator now > includes an array _y. > Is it safe and/or sane to just update _y as outlined above as well? Is > there a simpler, more pythonic way to address this that would hopefully be > more robust to future updates in scipy? Again, in v 0.11 everything works > well and expected results are produced, and in v 0.12 I get an IndexError when > _y is referencedas it isn't updated in my function while y itself is. > > Any help/pointers would be appreciated! 
> There's a ticket with discussion here: https://github.com/scipy/scipy/issues/2621 Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Jul 20 11:04:17 2013 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 20 Jul 2013 18:04:17 +0300 Subject: [SciPy-User] scipy 0.11.0 to 0.12.0 changes scipy.interpolate.interp1d, breaks constantly updated interpolator In-Reply-To: References: Message-ID: 05.07.2013 23:46, Benjamin Evans kirjoitti: [clip] > Is it safe and/or sane to just update _y as outlined above as well? Is > there a simpler, more pythonic way to address this that would hopefully > be more robust to future updates in scipy? Again, in v 0.11 everything > works well and expected results are produced, and in v 0.12 I get an > IndexError when _y is referenced > > as it isn't updated in my function while y itself is. The short answer is that interp1d does not currently support what you are trying to do. Since it supports also spline interpolants, it is in general not possible to update the interpolant online. -- Pauli Virtanen From pav at iki.fi Sat Jul 20 11:17:03 2013 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 20 Jul 2013 18:17:03 +0300 Subject: [SciPy-User] scipy 0.11.0 to 0.12.0 changes scipy.interpolate.interp1d, breaks constantly updated interpolator In-Reply-To: References: Message-ID: 20.07.2013 18:04, Pauli Virtanen kirjoitti: [clip] > The short answer is that interp1d does not currently support what you > are trying to do. Since it supports also spline interpolants, it is in > general not possible to update the interpolant online. To clarify: the easiest way is to recompute the spline coefficients when points are added, but this the same cost as reconstructing the whole interpolant so it's not really an on-line operation. It's probably possible to extend B-splines cheaply when points are added, but implementing this takes some work. If someone wants to take on adding an `add_points` method to the interpolator, that would be useful. For this particular use case, it can be useful even if it works only for linear interpolants (and raises an exception for the spline ones). -- Pauli Virtanen From mail.till at gmx.de Sun Jul 21 09:10:32 2013 From: mail.till at gmx.de (Till Stensitzki) Date: Sun, 21 Jul 2013 13:10:32 +0000 (UTC) Subject: [SciPy-User] fitting with convolution? References: <1374242252.1533.YahooMailNeo@web163906.mail.gq1.yahoo.com> Message-ID: David Baddeley yahoo.com.au> writes: > > > Hi Petro, > > when I run the code I get very similar, although not identical curves, which are roughly what you'd expect. Your exponential kernel that you're convolving with is not band limited, but FFTs (and hence FFT based convolution) are band-limited to whatever your effective nyquist frequency is (strictly speaking out of band frequencies might be aliased into the pass band - but in your case you can just think of the high frequency components as being discarded). This > probably applies generally to any discrete way of doing convolution, when compared to an analytical solution. Put simply, you can never sample fine enough ?to reproduce all the detail of the initial spikey bit of your exponential (if you increase your sampling frequency you'll get a better match, but you'll never quite get there). If you are referring to the factor of 14.5, it is a result of fft normalisation, and should more accurately be sqrt(N)/pi. 
> > hope this helps, > David

In this case the problem is much simpler: the analytical solution is not circular, while the FFT convolution is. With the right zero-padding it is possible to get identical results. But my testing showed this is slower than just using the analytical solution. greetings Till From parrenin.ujf at gmail.com Mon Jul 29 09:34:16 2013 From: parrenin.ujf at gmail.com (Frédéric Parrenin) Date: Mon, 29 Jul 2013 15:34:16 +0200 Subject: [SciPy-User] RFE: dictionary of figures in Matplotlib Message-ID: Currently, one calls figures by their number in matplotlib. If you have code which draws a lot of different figures at different places, with some of them being optional, this is not very convenient. One convenient way to call figures would be to use a dictionary of figures. Of course I could create some wrapper around the matplotlib.figure function, but it would be far more convenient if such a feature were standard in matplotlib. Is there any plan to implement such a feature? Best regards, Frédéric Parrenin -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Jul 29 09:35:41 2013 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 29 Jul 2013 14:35:41 +0100 Subject: [SciPy-User] RFE: dictionary of figures in Matplotlib In-Reply-To: References: Message-ID: The matplotlib mailing list can be found here: https://lists.sourceforge.net/lists/listinfo/matplotlib-users On Mon, Jul 29, 2013 at 2:34 PM, Frédéric Parrenin wrote: > Currently, one calls figures by their number in matplotlib. > If you have code which draws a lot of different figures at different > places, with some of them being optional, this is not very convenient. > > One convenient way to call figures would be to use a dictionary of figures. > Of course I could create some wrapper around the matplotlib.figure function > but it would be far more convenient if such a feature were standard in > matplotlib. > > Is there any plan to implement such a feature? > > Best regards, > > Frédéric Parrenin > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglists at xgm.de Wed Jul 31 09:25:56 2013 From: mailinglists at xgm.de (Florian Lindner) Date: Wed, 31 Jul 2013 15:25:56 +0200 Subject: [SciPy-User] Read file with comma decimal separator Message-ID: <4930843.uyAzFXlt8I@horus> Hello, I have a file that used comma as a decimal separator. How can I read a file like that using loadtxt or genfromtxt ? Thanks, Florian From mailinglists at xgm.de Wed Jul 31 09:31:07 2013 From: mailinglists at xgm.de (Florian Lindner) Date: Wed, 31 Jul 2013 15:31:07 +0200 Subject: [SciPy-User] Read file with comma decimal separator In-Reply-To: <4930843.uyAzFXlt8I@horus> References: <4930843.uyAzFXlt8I@horus> Message-ID: <2401670.QDN292GOlR@horus> Am Mittwoch, 31. Juli 2013, 15:25:56 schrieb Florian Lindner: > Hello, > > I have a file that used comma as a decimal separator. How can I read a file > like that using loadtxt or genfromtxt ?
Since I'm not sure how many columns the file will have, I tried:

conv = {}
for i in range(1000):
    conv[i] = lambda a: a.replace(",", ".")

data = np.loadtxt(f, skiprows = 2, converters = conv)

but:

File "/usr/lib/python2.7/site-packages/numpy/lib/npyio.py", line 817, in loadtxt
    converters[i] = conv
IndexError: list assignment index out of range

Regards, Florian From blattnem at gmail.com Wed Jul 31 09:37:38 2013 From: blattnem at gmail.com (Marcel Blattner) Date: Wed, 31 Jul 2013 15:37:38 +0200 Subject: [SciPy-User] Read file with comma decimal separator In-Reply-To: <2401670.QDN292GOlR@horus> References: <4930843.uyAzFXlt8I@horus> <2401670.QDN292GOlR@horus> Message-ID: <8222E59D-E8B5-471A-B7C7-19A70FAA211B@gmail.com> .... Use the 'delimiter' key in the genfromtxt routine. Like in the docu

>>> s = StringIO("1,1.3,abcde")
>>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'),
...     ('mystring','S5')], delimiter=",")
>>> data
array((1, 1.3, 'abcde'), dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')])

On Jul 31, 2013, at 15:31, Florian Lindner wrote: > Am Mittwoch, 31. Juli 2013, 15:25:56 schrieb Florian Lindner: >> Hello, >> >> I have a file that used comma as a decimal separator. How can I read a file >> like that using loadtxt or genfromtxt ? > > Since I'm not sure how many columns the file will have, I tried: > > conv = {} > for i in range(1000): > conv[i] = lambda a: a.replace(",", ".") > > data = np.loadtxt(f, skiprows = 2, converters = conv) > > but: > > File "/usr/lib/python2.7/site-packages/numpy/lib/npyio.py", line 817, in > loadtxt > converters[i] = conv > IndexError: list assignment index out of range > > Regards, > Florian > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Wed Jul 31 09:40:43 2013 From: pgmdevlist at gmail.com (Pierre Gerard-Marchant) Date: Wed, 31 Jul 2013 15:40:43 +0200 Subject: [SciPy-User] Read file with comma decimal separator In-Reply-To: <2401670.QDN292GOlR@horus> References: <4930843.uyAzFXlt8I@horus> <2401670.QDN292GOlR@horus> Message-ID: <77EAFA3B-5560-4EDF-90C1-F24936AA344E@gmail.com> On Jul 31, 2013, at 15:31 , Florian Lindner wrote: > Am Mittwoch, 31. Juli 2013, 15:25:56 schrieb Florian Lindner: >> Hello, >> >> I have a file that used comma as a decimal separator. How can I read a file >> like that using loadtxt or genfromtxt ?

A quick and dirty approach would be to create a generator that would parse your initial input and replace the ',' by '.' on each line. You'd just have to feed the generator to `genfromtxt`:

>>> X = StringIO('1,1\t1,2\t1,3\n2,1\t2,2\t2,3')
>>> replaced = (line.replace(",", ".") for line in X)
>>> np.genfromtxt(replaced, delimiter="\t")

Of course, that'd work only if you don't intend to use "," as your delimiter, in which case you're out of luck.
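As a follow-up to the converters attempt earlier in this thread: the IndexError in the traceback above comes from registering converter keys for columns that do not exist (loadtxt assigns each key into a per-column list). A rough sketch of that route, assuming a whitespace-delimited file with two header rows as in the earlier example; the helper names loadtxt_decimal_comma and tofloat are made up for illustration:

import numpy as np

def tofloat(s):
    # Converters may receive bytes or str depending on the numpy version.
    if isinstance(s, bytes):
        s = s.decode("latin1")
    return float(s.replace(",", "."))

def loadtxt_decimal_comma(fname, skiprows=2):
    # Peek at the first data row to learn how many columns there are,
    # so converters are only registered for columns that exist.
    with open(fname) as fh:
        for _ in range(skiprows):
            next(fh)
        ncols = len(next(fh).split())
    conv = {i: tofloat for i in range(ncols)}
    return np.loadtxt(fname, skiprows=skiprows, converters=conv)

Usage would then be something like data = loadtxt_decimal_comma("data.txt"), where the filename is just a placeholder. (For completeness, recent pandas versions also expose a decimal="," option in read_csv.)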
From gary.ruben at gmail.com Wed Jul 31 10:27:16 2013 From: gary.ruben at gmail.com (gary ruben) Date: Thu, 1 Aug 2013 00:27:16 +1000 Subject: [SciPy-User] Read file with comma decimal separator In-Reply-To: <77EAFA3B-5560-4EDF-90C1-F24936AA344E@gmail.com> References: <4930843.uyAzFXlt8I@horus> <2401670.QDN292GOlR@horus> <77EAFA3B-5560-4EDF-90C1-F24936AA344E@gmail.com> Message-ID: You could preread the input using StringIO import StringIO import numpy as np s = open('test.txt').read().replace(',','.') data = np.loadtxt(StringIO.StringIO(s)) print data On 31 July 2013 23:40, Pierre Gerard-Marchant wrote: > > On Jul 31, 2013, at 15:31 , Florian Lindner wrote: > > > Am Mittwoch, 31. Juli 2013, 15:25:56 schrieb Florian Lindner: > >> Hello, > >> > >> I have a file that used comma as a decimal separator. How can I read a > file > >> like that using loadtxt or genfromtxt ? > > A quick and dirty approach would be to create a generator that would parse > your initial input and replace the ',' by '.' on each line. You'd just have > to feed the generator to `genfromtxt`: > > >>>X = StringIO('1,1\t1,2\t1,3\n2,1\t2,2\t,2,3') > >>>replaced = (line.replace(",", ".") for line in X) > >>>np.genfromtxt(replaced, delimiter="\t") > > Of course, that'd work only if you don't intend to use "," as your > delimiter, in which case you're out of luck. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: