From jdhardy at gmail.com Sun Jun 1 09:05:10 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Sun, 1 Jun 2014 08:05:10 +0100 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: On Sat, May 31, 2014 at 3:23 PM, Doug Blank wrote: > On Fri, May 30, 2014 at 5:22 PM, Steve Baer wrote: > >> I would definitely be interested in helping, but don't exactly know where >> to start. We have a lot of users who would love to get access to numpy on >> our OSX and 64bit windows versions of our product. This is only going to >> become a bigger problem in the future since we will probably only have a >> 64bit version for Window in the next version of Rhino. >> > > I agree. Let's start a serious discussion about how to solve the lack of > numpy in IronPython. > > We could look into using ctypes (IronClad?) and wrap what already exists. > > We could look into a cross-platform DLL drop-in replacement. > > Between speed and compatibility, initially I'm most interested in > compatibility. But speed should be a long term goal. > > We could write a pure-python prototype initially, and slowly move that to > C#, or another CLR language. That would be useful for all non-C-based > Python implementations, and would probably be quickest to write and test. > > A related note: Python3 just added a new matrix multiplication operator > [1]. Hope to see more numpy-related functionality in standard Python in the > future. > > Other ideas? Where to start? > I would start by asking the NumPy team what the best option is, and seeing what the NumPyPy team are doing - the more work that can be shared, the better. - Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug.blank at gmail.com Sun Jun 1 15:29:07 2014 From: doug.blank at gmail.com (Doug Blank) Date: Sun, 1 Jun 2014 09:29:07 -0400 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: On Sun, Jun 1, 2014 at 3:05 AM, Jeff Hardy wrote: > On Sat, May 31, 2014 at 3:23 PM, Doug Blank wrote: > >> On Fri, May 30, 2014 at 5:22 PM, Steve Baer wrote: >> >>> I would definitely be interested in helping, but don't exactly know >>> where to start. We have a lot of users who would love to get access to >>> numpy on our OSX and 64bit windows versions of our product. This is only >>> going to become a bigger problem in the future since we will probably only >>> have a 64bit version for Window in the next version of Rhino. >>> >> >> I agree. Let's start a serious discussion about how to solve the lack of >> numpy in IronPython. >> >> We could look into using ctypes (IronClad?) and wrap what already exists. >> >> We could look into a cross-platform DLL drop-in replacement. >> >> Between speed and compatibility, initially I'm most interested in >> compatibility. But speed should be a long term goal. >> >> We could write a pure-python prototype initially, and slowly move that to >> C#, or another CLR language. That would be useful for all non-C-based >> Python implementations, and would probably be quickest to write and test. >> >> A related note: Python3 just added a new matrix multiplication operator >> [1]. Hope to see more numpy-related functionality in standard Python in the >> future. >> >> Other ideas? Where to start? >> > > I would start by asking the NumPy team what the best option is, and seeing > what the NumPyPy team are doing - the more work that can be shared, the > better. > That is a good idea, and I started to try to define what we might mean by "best option." Almost all of the work I have seen coming from most related projects would define best in terms of speed. However, this comes at a pretty big cost in terms of maintenance---for example, having wrapped C libraries compiled for each platform, for each bus size (32, 64). In poking around, I think the Jython needs might be closest to ours. Looks like they have two starts: a pure-Java library based on the older Numeric [1], and a native interface for talking directly to C [2]. The native interface is a long term project, and is not yet to the point of working with numpy. As a quick test, I tried to IKVM-convert the pure Java jar file into a DLL. I think that could work (eventually) but would require bringing a lot of Jython, and would probably always be a little wonky. In addition, that API is the older Numeric. A third option, as mentioned by Ivan, is to have a bridge to CPython interfacing with numpy. But that would make CPython a dependency---not really something that many IronPython users would appreciate. Of course, if one is looking for doing numeric operations, you could use a different .NET/Mono math library. But what I am interested in is the numpy API, so that other code will be usable in IronPython. To me, it looks like the best bet at this point in time is to write our own. The next question is: write it in pure Python, or C#/F#/etc. If we write it in pure Python, there is the chance that some Jython developers (and maybe other Python implementation people) might be interested in helping. It would be slow, but we could rely on Python for handling type operations (float times int) to do the right thing. It would also be immediately usable by future CPython users. It could be the case that a future Python implementation could do some JITting to make it run fast enough. (A pure Python version might also be useful for educational uses, as I presume it would be more easily understood by Python students). If we write it as a CLR library, it would be as fast as managed code would allow, and be available for other CLR languages (like F#, Boo, etc). But we would probably be alone in developing it, as it is mostly of interest to Python users using numpy and the CLR. I guess I am leaning towards a pure Python implementation of the latest numpy API. Perhaps followed up by a DLL version. -Doug [1] - https://bitbucket.org/zornslemon/jnumeric-ra/overview [2] - http://jyni.org/ > > - Jeff > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at mcneel.com Mon Jun 2 19:13:15 2014 From: steve at mcneel.com (Steve Baer) Date: Mon, 2 Jun 2014 10:13:15 -0700 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: We should also check with the Enthought guys to see what they did and if they are willing to share. -Steve On Sun, Jun 1, 2014 at 6:29 AM, Doug Blank wrote: > On Sun, Jun 1, 2014 at 3:05 AM, Jeff Hardy wrote: > >> On Sat, May 31, 2014 at 3:23 PM, Doug Blank wrote: >> >>> On Fri, May 30, 2014 at 5:22 PM, Steve Baer wrote: >>> >>>> I would definitely be interested in helping, but don't exactly know >>>> where to start. We have a lot of users who would love to get access to >>>> numpy on our OSX and 64bit windows versions of our product. This is only >>>> going to become a bigger problem in the future since we will probably only >>>> have a 64bit version for Window in the next version of Rhino. >>>> >>> >>> I agree. Let's start a serious discussion about how to solve the lack of >>> numpy in IronPython. >>> >>> We could look into using ctypes (IronClad?) and wrap what already exists. >>> >>> We could look into a cross-platform DLL drop-in replacement. >>> >>> Between speed and compatibility, initially I'm most interested in >>> compatibility. But speed should be a long term goal. >>> >>> We could write a pure-python prototype initially, and slowly move that >>> to C#, or another CLR language. That would be useful for all non-C-based >>> Python implementations, and would probably be quickest to write and test. >>> >>> A related note: Python3 just added a new matrix multiplication operator >>> [1]. Hope to see more numpy-related functionality in standard Python in the >>> future. >>> >>> Other ideas? Where to start? >>> >> >> I would start by asking the NumPy team what the best option is, and >> seeing what the NumPyPy team are doing - the more work that can be shared, >> the better. >> > > That is a good idea, and I started to try to define what we might mean by > "best option." Almost all of the work I have seen coming from most related > projects would define best in terms of speed. However, this comes at a > pretty big cost in terms of maintenance---for example, having wrapped C > libraries compiled for each platform, for each bus size (32, 64). > > In poking around, I think the Jython needs might be closest to ours. Looks > like they have two starts: a pure-Java library based on the older Numeric > [1], and a native interface for talking directly to C [2]. The native > interface is a long term project, and is not yet to the point of working > with numpy. As a quick test, I tried to IKVM-convert the pure Java jar file > into a DLL. I think that could work (eventually) but would require bringing > a lot of Jython, and would probably always be a little wonky. In addition, > that API is the older Numeric. > > A third option, as mentioned by Ivan, is to have a bridge to CPython > interfacing with numpy. But that would make CPython a dependency---not > really something that many IronPython users would appreciate. > > Of course, if one is looking for doing numeric operations, you could use a > different .NET/Mono math library. But what I am interested in is the numpy > API, so that other code will be usable in IronPython. > > To me, it looks like the best bet at this point in time is to write our > own. The next question is: write it in pure Python, or C#/F#/etc. > > If we write it in pure Python, there is the chance that some Jython > developers (and maybe other Python implementation people) might be > interested in helping. It would be slow, but we could rely on Python for > handling type operations (float times int) to do the right thing. It would > also be immediately usable by future CPython users. It could be the case > that a future Python implementation could do some JITting to make it run > fast enough. (A pure Python version might also be useful for educational > uses, as I presume it would be more easily understood by Python students). > > If we write it as a CLR library, it would be as fast as managed code would > allow, and be available for other CLR languages (like F#, Boo, etc). But we > would probably be alone in developing it, as it is mostly of interest to > Python users using numpy and the CLR. > > I guess I am leaning towards a pure Python implementation of the latest > numpy API. Perhaps followed up by a DLL version. > > -Doug > > [1] - https://bitbucket.org/zornslemon/jnumeric-ra/overview > [2] - http://jyni.org/ > > > >> >> - Jeff >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug.blank at gmail.com Mon Jun 2 19:28:56 2014 From: doug.blank at gmail.com (Doug Blank) Date: Mon, 2 Jun 2014 13:28:56 -0400 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: On Mon, Jun 2, 2014 at 1:13 PM, Steve Baer wrote: > We should also check with the Enthought guys to see what they did and if > they are willing to share. > That is the code here that I mentioned in the first post: https://www.enthought.com/repo/.iron/ I can't tell what license this is done under. Also, it is Windows only. It looks like a Cython for IronPython might be closer to working: https://bitbucket.org/cwitty/cython-for-ironpython/overview Especially here considering these: https://bitbucket.org/cwitty/cython-for-ironpython/src/9dc7e1a2d56a/Cython/Includes/posix/?at=default https://bitbucket.org/cwitty/cython-for-ironpython/commits/9dc7e1a2d56a3e3c8a7b3c7a5d23073a54b2b814 But, I haven't worked much with Python and C libraries. -Doug > > -Steve > > > On Sun, Jun 1, 2014 at 6:29 AM, Doug Blank wrote: > >> On Sun, Jun 1, 2014 at 3:05 AM, Jeff Hardy wrote: >> >>> On Sat, May 31, 2014 at 3:23 PM, Doug Blank >>> wrote: >>> >>>> On Fri, May 30, 2014 at 5:22 PM, Steve Baer wrote: >>>> >>>>> I would definitely be interested in helping, but don't exactly know >>>>> where to start. We have a lot of users who would love to get access to >>>>> numpy on our OSX and 64bit windows versions of our product. This is only >>>>> going to become a bigger problem in the future since we will probably only >>>>> have a 64bit version for Window in the next version of Rhino. >>>>> >>>> >>>> I agree. Let's start a serious discussion about how to solve the lack >>>> of numpy in IronPython. >>>> >>>> We could look into using ctypes (IronClad?) and wrap what already >>>> exists. >>>> >>>> We could look into a cross-platform DLL drop-in replacement. >>>> >>>> Between speed and compatibility, initially I'm most interested in >>>> compatibility. But speed should be a long term goal. >>>> >>>> We could write a pure-python prototype initially, and slowly move that >>>> to C#, or another CLR language. That would be useful for all non-C-based >>>> Python implementations, and would probably be quickest to write and test. >>>> >>>> A related note: Python3 just added a new matrix multiplication operator >>>> [1]. Hope to see more numpy-related functionality in standard Python in the >>>> future. >>>> >>>> Other ideas? Where to start? >>>> >>> >>> I would start by asking the NumPy team what the best option is, and >>> seeing what the NumPyPy team are doing - the more work that can be shared, >>> the better. >>> >> >> That is a good idea, and I started to try to define what we might mean by >> "best option." Almost all of the work I have seen coming from most related >> projects would define best in terms of speed. However, this comes at a >> pretty big cost in terms of maintenance---for example, having wrapped C >> libraries compiled for each platform, for each bus size (32, 64). >> >> In poking around, I think the Jython needs might be closest to ours. >> Looks like they have two starts: a pure-Java library based on the older >> Numeric [1], and a native interface for talking directly to C [2]. The >> native interface is a long term project, and is not yet to the point of >> working with numpy. As a quick test, I tried to IKVM-convert the pure Java >> jar file into a DLL. I think that could work (eventually) but would require >> bringing a lot of Jython, and would probably always be a little wonky. In >> addition, that API is the older Numeric. >> >> A third option, as mentioned by Ivan, is to have a bridge to CPython >> interfacing with numpy. But that would make CPython a dependency---not >> really something that many IronPython users would appreciate. >> >> Of course, if one is looking for doing numeric operations, you could use >> a different .NET/Mono math library. But what I am interested in is the >> numpy API, so that other code will be usable in IronPython. >> >> To me, it looks like the best bet at this point in time is to write our >> own. The next question is: write it in pure Python, or C#/F#/etc. >> >> If we write it in pure Python, there is the chance that some Jython >> developers (and maybe other Python implementation people) might be >> interested in helping. It would be slow, but we could rely on Python for >> handling type operations (float times int) to do the right thing. It would >> also be immediately usable by future CPython users. It could be the case >> that a future Python implementation could do some JITting to make it run >> fast enough. (A pure Python version might also be useful for educational >> uses, as I presume it would be more easily understood by Python students). >> >> If we write it as a CLR library, it would be as fast as managed code >> would allow, and be available for other CLR languages (like F#, Boo, etc). >> But we would probably be alone in developing it, as it is mostly of >> interest to Python users using numpy and the CLR. >> >> I guess I am leaning towards a pure Python implementation of the latest >> numpy API. Perhaps followed up by a DLL version. >> >> -Doug >> >> [1] - https://bitbucket.org/zornslemon/jnumeric-ra/overview >> [2] - http://jyni.org/ >> >> >> >>> >>> - Jeff >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Tue Jun 3 12:30:19 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Tue, 3 Jun 2014 12:30:19 +0200 Subject: [Ironpython-users] cython-for-ironpython Message-ID: Hi, I am trying to get hello world with cython. I took a simple cnumop.cpp from tests and run: ipy cython.py --dotnet tests/compile/cnumop.pyx The resulting file (cnumop.cpp) is now part of project (cpp/clr/class library). After resolving usual reference dependencies I hit: PyErr_Format Since this is a symbol which is coming out of cpython, I am looking for equivalent in clr world. I looked in cython itself but found nothing (mapping?) Is there some sort of mapping dll (cpython symbols => iron symbols)? Or perhaps I am doing something wrong when invoking cython? Any help or information about this is greatly appreciated. --pawel -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Tue Jun 3 14:15:34 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Tue, 3 Jun 2014 14:15:34 +0200 Subject: [Ironpython-users] Fwd: cython-for-ironpython In-Reply-To: References: <727D8E16AE957149B447FE368139F2B539BDF3CB@SERVER10> Message-ID: ---------- Forwarded message ---------- From: Pawel Jasinski Date: Tue, Jun 3, 2014 at 2:15 PM Subject: Re: [Ironpython-users] cython-for-ironpython To: Markus Schaber sorry I should mention it more explicit, I am trying to resurrect cython-for-ironpython https://bitbucket.org/jasonmccampbell/cython-for-ironpython The hint is well hidden as '--dotnet' parameter to cython :-) The code generated is intended for c++/clr Somehow, based on the posts here: http://blog.enthought.com/python/scipy-for-net/#.U42jOygfzm_ (interesting bits at the bottom), I have assumed that I can use cython-for-ironpython to create c++/clr module and use it directly from ironpython. I also assumed, that as long as I don't link with cpython extension and stay within what comes out of cython, there are no external dependencies. The post does not mention ironclad, but I will check. Perhaps it is a silent dependency. I also looked at the generated code and I can see a reference to CallSite which is marked in ironpython assembly as internal. So, either things changed a bit since 2011 or they had a custom build of IronPython. --pawel On Tue, Jun 3, 2014 at 1:47 PM, Markus Schaber wrote: > Hi, Pawel, > > > > IronPython has completely different inner workings than cPython. It uses > .NET and the DLR infrastructure (memory management, objects, etc..) instead > of the C-implemented Infrastructure provided by cPython. > > > > Two projects tried to bridge the gap between cPython and IronPython / .NET > > > > https://code.google.com/p/ironclad/: Import cPython extensions in > IronPython, this is what could match your "mapping dll" requirement.) > > > > http://pythonnet.sourceforge.net/: Allows cPython code to call into .NET > code. > > > > Both projects look rather dormant nowadays. > > > > So it is not easily possible to use output compiled by cython within > IronPython. > > > > The same applies to Jython and some other alternative implementations. > > > > On the other hand, IronPython and the DLR provide powerful Just-in-Time > capabilities, so you may get the desired speed without actually using > Cython. > > > > > > Best regards > > Markus Schaber > > *CODESYS?* a trademark of 3S-Smart Software Solutions GmbH > > *Inspiring Automation Solutions * > ------------------------------ > > 3S-Smart Software Solutions GmbH > Dipl.-Inf. Markus Schaber | Product Development Core Technology > Memminger Str. 151 | 87439 Kempten | Germany > Tel. +49-831-54031-979 | Fax +49-831-54031-50 > > E-Mail: m.schaber at codesys.com | Web: codesys.com > | CODESYS store: store.codesys.com > CODESYS forum: forum.codesys.com > > *Managing Directors: Dipl.Inf. Dieter Hess, Dipl.Inf. Manfred Werner* | *Trade > register: Kempten HRB 6186* | *Tax ID No.: DE 167014915* > > *Von:* Ironpython-users [mailto:ironpython-users-bounces+m.schaber= > codesys.com at python.org] *Im Auftrag von *Pawel Jasinski > *Gesendet:* Dienstag, 3. Juni 2014 12:30 > *An:* ironpython-users at python.org > *Betreff:* [Ironpython-users] cython-for-ironpython > > > > Hi, > > I am trying to get hello world with cython. I took a simple cnumop.cpp > from tests and run: > > ipy cython.py --dotnet tests/compile/cnumop.pyx > > The resulting file (cnumop.cpp) is now part of project (cpp/clr/class > library). > > After resolving usual reference dependencies I hit: PyErr_Format > > Since this is a symbol which is coming out of cpython, I am looking for > equivalent in clr world. > > I looked in cython itself but found nothing (mapping?) > > Is there some sort of mapping dll (cpython symbols => iron symbols)? > > Or perhaps I am doing something wrong when invoking cython? > > > > Any help or information about this is greatly appreciated. > > --pawel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.schaber at codesys.com Tue Jun 3 13:47:51 2014 From: m.schaber at codesys.com (Markus Schaber) Date: Tue, 3 Jun 2014 11:47:51 +0000 Subject: [Ironpython-users] cython-for-ironpython In-Reply-To: References: Message-ID: <727D8E16AE957149B447FE368139F2B539BDF3CB@SERVER10> Hi, Pawel, IronPython has completely different inner workings than cPython. It uses .NET and the DLR infrastructure (memory management, objects, etc..) instead of the C-implemented Infrastructure provided by cPython. Two projects tried to bridge the gap between cPython and IronPython / .NET https://code.google.com/p/ironclad/: Import cPython extensions in IronPython, this is what could match your "mapping dll" requirement.) http://pythonnet.sourceforge.net/: Allows cPython code to call into .NET code. Both projects look rather dormant nowadays. So it is not easily possible to use output compiled by cython within IronPython. The same applies to Jython and some other alternative implementations. On the other hand, IronPython and the DLR provide powerful Just-in-Time capabilities, so you may get the desired speed without actually using Cython. Best regards Markus Schaber CODESYS? a trademark of 3S-Smart Software Solutions GmbH Inspiring Automation Solutions ________________________________ 3S-Smart Software Solutions GmbH Dipl.-Inf. Markus Schaber | Product Development Core Technology Memminger Str. 151 | 87439 Kempten | Germany Tel. +49-831-54031-979 | Fax +49-831-54031-50 E-Mail: m.schaber at codesys.com | Web: codesys.com | CODESYS store: store.codesys.com CODESYS forum: forum.codesys.com Managing Directors: Dipl.Inf. Dieter Hess, Dipl.Inf. Manfred Werner | Trade register: Kempten HRB 6186 | Tax ID No.: DE 167014915 Von: Ironpython-users [mailto:ironpython-users-bounces+m.schaber=codesys.com at python.org] Im Auftrag von Pawel Jasinski Gesendet: Dienstag, 3. Juni 2014 12:30 An: ironpython-users at python.org Betreff: [Ironpython-users] cython-for-ironpython Hi, I am trying to get hello world with cython. I took a simple cnumop.cpp from tests and run: ipy cython.py --dotnet tests/compile/cnumop.pyx The resulting file (cnumop.cpp) is now part of project (cpp/clr/class library). After resolving usual reference dependencies I hit: PyErr_Format Since this is a symbol which is coming out of cpython, I am looking for equivalent in clr world. I looked in cython itself but found nothing (mapping?) Is there some sort of mapping dll (cpython symbols => iron symbols)? Or perhaps I am doing something wrong when invoking cython? Any help or information about this is greatly appreciated. --pawel -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug.blank at gmail.com Tue Jun 3 14:39:57 2014 From: doug.blank at gmail.com (Doug Blank) Date: Tue, 3 Jun 2014 08:39:57 -0400 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: I've tried to capture all of the links and options towards developing a numpy for IronPython here: http://calicoproject.org/Numpy Please let me know if I have missed something, or if you would like to add to that page. My summary so far: 1) a new wrapper (based on IronClad or Cython) would create fast code, and could utilize the current and future versions of numpy, but would (I suspect) be low-level, high-maintenance in keeping it working on multi-platforms/architectures. This wrapper work might also allow other packages in the SciPy to work. Requires low-level C#, CLR, and IronPython-specific skills to develop (a small group of people have these skills; only IronPython users would benefit). 2) A pure-Python version would be a lot of work (perhaps building on PyPy's RPython version and converting their C) and be slow, but would be little maintenance as most of the details for the current version of numpy would be static. Requires generic Python skills to develop (a large group of people have these skills; any generic Python implementation could use). -Doug On Mon, Jun 2, 2014 at 2:09 PM, Doug Blank wrote: > On Mon, Jun 2, 2014 at 1:41 PM, Steve Baer wrote: > >> > That is the code here that I mentioned in the first post: >> > https://www.enthought.com/repo/.iron/ >> >> Unless I'm missing something, that's the binary installation and not the >> actual source code. I believe Enthought used managed C++ which does tie it >> to Windows, but it would also be good to see since it would give me a good >> idea about the effort involved in trying to make a C# DLL with pInvokes to >> a C DLL. I've done a lot of this type of work in the past and it usually >> isn't very hard once I've gotten things set up correctly (like having an >> executable to automatically generate the pinvokes from the exported C >> functions.) >> > > I think you are correct. A bit more googling found: > > http://blog.enthought.com/python/scipy-for-net/#.U4y83x92nfE > > which states: > > """ > The first release of SciPy and NumPy for .NET are available now as binary > distributions from SciPy.org or directly from Enthought. All of the code > for these and the supporting projects are open source and available at the > links below. > > NumPy for .NET:https://github.com/numpy/numpy-refactor > SciPy for .NET:https://github.com/jasonmccampbell/scipy-refactor > Cython for .NET: > https://bitbucket.org/jasonmccampbell/cython-for-ironpython > FWrap extensions:https://github.com/jasonmccampbell/fwrap > """ > > So it looks like the Enthought project was using the cython-for-ironpython. > > Please let us know what you think about making this work for C#; thanks! > > -Doug > > >> >> -Steve >> >> Steve Baer >> Robert McNeel & Associates >> www.rhino3d.com >> >> >> On Mon, Jun 2, 2014 at 10:28 AM, Doug Blank wrote: >> >>> On Mon, Jun 2, 2014 at 1:13 PM, Steve Baer wrote: >>> >>>> We should also check with the Enthought guys to see what they did and >>>> if they are willing to share. >>>> >>> >>> That is the code here that I mentioned in the first post: >>> >>> https://www.enthought.com/repo/.iron/ >>> >>> I can't tell what license this is done under. Also, it is Windows only. >>> >>> It looks like a Cython for IronPython might be closer to working: >>> >>> https://bitbucket.org/cwitty/cython-for-ironpython/overview >>> >>> Especially here considering these: >>> >>> >>> https://bitbucket.org/cwitty/cython-for-ironpython/src/9dc7e1a2d56a/Cython/Includes/posix/?at=default >>> >>> https://bitbucket.org/cwitty/cython-for-ironpython/commits/9dc7e1a2d56a3e3c8a7b3c7a5d23073a54b2b814 >>> >>> But, I haven't worked much with Python and C libraries. >>> >>> -Doug >>> >>> >>>> >>>> -Steve >>>> >>>> >>>> On Sun, Jun 1, 2014 at 6:29 AM, Doug Blank >>>> wrote: >>>> >>>>> On Sun, Jun 1, 2014 at 3:05 AM, Jeff Hardy wrote: >>>>> >>>>>> On Sat, May 31, 2014 at 3:23 PM, Doug Blank >>>>>> wrote: >>>>>> >>>>>>> On Fri, May 30, 2014 at 5:22 PM, Steve Baer >>>>>>> wrote: >>>>>>> >>>>>>>> I would definitely be interested in helping, but don't exactly know >>>>>>>> where to start. We have a lot of users who would love to get access to >>>>>>>> numpy on our OSX and 64bit windows versions of our product. This is only >>>>>>>> going to become a bigger problem in the future since we will probably only >>>>>>>> have a 64bit version for Window in the next version of Rhino. >>>>>>>> >>>>>>> >>>>>>> I agree. Let's start a serious discussion about how to solve the >>>>>>> lack of numpy in IronPython. >>>>>>> >>>>>>> We could look into using ctypes (IronClad?) and wrap what already >>>>>>> exists. >>>>>>> >>>>>>> We could look into a cross-platform DLL drop-in replacement. >>>>>>> >>>>>>> Between speed and compatibility, initially I'm most interested in >>>>>>> compatibility. But speed should be a long term goal. >>>>>>> >>>>>>> We could write a pure-python prototype initially, and slowly move >>>>>>> that to C#, or another CLR language. That would be useful for all >>>>>>> non-C-based Python implementations, and would probably be quickest to write >>>>>>> and test. >>>>>>> >>>>>>> A related note: Python3 just added a new matrix multiplication >>>>>>> operator [1]. Hope to see more numpy-related functionality in standard >>>>>>> Python in the future. >>>>>>> >>>>>>> Other ideas? Where to start? >>>>>>> >>>>>> >>>>>> I would start by asking the NumPy team what the best option is, and >>>>>> seeing what the NumPyPy team are doing - the more work that can be shared, >>>>>> the better. >>>>>> >>>>> >>>>> That is a good idea, and I started to try to define what we might mean >>>>> by "best option." Almost all of the work I have seen coming from most >>>>> related projects would define best in terms of speed. However, this comes >>>>> at a pretty big cost in terms of maintenance---for example, having wrapped >>>>> C libraries compiled for each platform, for each bus size (32, 64). >>>>> >>>>> In poking around, I think the Jython needs might be closest to ours. >>>>> Looks like they have two starts: a pure-Java library based on the older >>>>> Numeric [1], and a native interface for talking directly to C [2]. The >>>>> native interface is a long term project, and is not yet to the point of >>>>> working with numpy. As a quick test, I tried to IKVM-convert the pure Java >>>>> jar file into a DLL. I think that could work (eventually) but would require >>>>> bringing a lot of Jython, and would probably always be a little wonky. In >>>>> addition, that API is the older Numeric. >>>>> >>>>> A third option, as mentioned by Ivan, is to have a bridge to CPython >>>>> interfacing with numpy. But that would make CPython a dependency---not >>>>> really something that many IronPython users would appreciate. >>>>> >>>>> Of course, if one is looking for doing numeric operations, you could >>>>> use a different .NET/Mono math library. But what I am interested in is the >>>>> numpy API, so that other code will be usable in IronPython. >>>>> >>>>> To me, it looks like the best bet at this point in time is to write >>>>> our own. The next question is: write it in pure Python, or C#/F#/etc. >>>>> >>>>> If we write it in pure Python, there is the chance that some Jython >>>>> developers (and maybe other Python implementation people) might be >>>>> interested in helping. It would be slow, but we could rely on Python for >>>>> handling type operations (float times int) to do the right thing. It would >>>>> also be immediately usable by future CPython users. It could be the case >>>>> that a future Python implementation could do some JITting to make it run >>>>> fast enough. (A pure Python version might also be useful for educational >>>>> uses, as I presume it would be more easily understood by Python students). >>>>> >>>>> If we write it as a CLR library, it would be as fast as managed code >>>>> would allow, and be available for other CLR languages (like F#, Boo, etc). >>>>> But we would probably be alone in developing it, as it is mostly of >>>>> interest to Python users using numpy and the CLR. >>>>> >>>>> I guess I am leaning towards a pure Python implementation of the >>>>> latest numpy API. Perhaps followed up by a DLL version. >>>>> >>>>> -Doug >>>>> >>>>> [1] - https://bitbucket.org/zornslemon/jnumeric-ra/overview >>>>> [2] - http://jyni.org/ >>>>> >>>>> >>>>> >>>>>> >>>>>> - Jeff >>>>>> >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From olof.bjarnason at gmail.com Tue Jun 3 14:53:43 2014 From: olof.bjarnason at gmail.com (Olof Bjarnason) Date: Tue, 3 Jun 2014 13:53:43 +0100 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: Why isn't CPython+NumPy+SciPy (or what you need on top of NumPy) enough? It's been tested and maintained for a long time, and works quite well? It does seem like a daunting task to try and build and maintain something separate from the mainline NumPy/SciPy community... On 3 June 2014 13:39, Doug Blank wrote: > I've tried to capture all of the links and options towards developing a > numpy for IronPython here: > > http://calicoproject.org/Numpy > > Please let me know if I have missed something, or if you would like to add > to that page. > > My summary so far: > > 1) a new wrapper (based on IronClad or Cython) would create fast code, and > could utilize the current and future versions of numpy, but would (I > suspect) be low-level, high-maintenance in keeping it working on > multi-platforms/architectures. This wrapper work might also allow other > packages in the SciPy to work. Requires low-level C#, CLR, and > IronPython-specific skills to develop (a small group of people have these > skills; only IronPython users would benefit). > > 2) A pure-Python version would be a lot of work (perhaps building on PyPy's > RPython version and converting their C) and be slow, but would be little > maintenance as most of the details for the current version of numpy would be > static. Requires generic Python skills to develop (a large group of people > have these skills; any generic Python implementation could use). > > -Doug > > > On Mon, Jun 2, 2014 at 2:09 PM, Doug Blank wrote: >> >> On Mon, Jun 2, 2014 at 1:41 PM, Steve Baer wrote: >>> >>> > That is the code here that I mentioned in the first post: >>> > https://www.enthought.com/repo/.iron/ >>> >>> Unless I'm missing something, that's the binary installation and not the >>> actual source code. I believe Enthought used managed C++ which does tie it >>> to Windows, but it would also be good to see since it would give me a good >>> idea about the effort involved in trying to make a C# DLL with pInvokes to a >>> C DLL. I've done a lot of this type of work in the past and it usually isn't >>> very hard once I've gotten things set up correctly (like having an >>> executable to automatically generate the pinvokes from the exported C >>> functions.) >> >> >> I think you are correct. A bit more googling found: >> >> http://blog.enthought.com/python/scipy-for-net/#.U4y83x92nfE >> >> which states: >> >> """ >> The first release of SciPy and NumPy for .NET are available now as binary >> distributions from SciPy.org or directly from Enthought. All of the code for >> these and the supporting projects are open source and available at the links >> below. >> >> NumPy for .NET:https://github.com/numpy/numpy-refactor >> SciPy for .NET:https://github.com/jasonmccampbell/scipy-refactor >> Cython for .NET: >> https://bitbucket.org/jasonmccampbell/cython-for-ironpython >> FWrap extensions:https://github.com/jasonmccampbell/fwrap >> """ >> >> So it looks like the Enthought project was using the >> cython-for-ironpython. >> >> Please let us know what you think about making this work for C#; thanks! >> >> -Doug >> >>> >>> >>> -Steve >>> >>> Steve Baer >>> Robert McNeel & Associates >>> www.rhino3d.com >>> >>> >>> On Mon, Jun 2, 2014 at 10:28 AM, Doug Blank wrote: >>>> >>>> On Mon, Jun 2, 2014 at 1:13 PM, Steve Baer wrote: >>>>> >>>>> We should also check with the Enthought guys to see what they did and >>>>> if they are willing to share. >>>> >>>> >>>> That is the code here that I mentioned in the first post: >>>> >>>> https://www.enthought.com/repo/.iron/ >>>> >>>> I can't tell what license this is done under. Also, it is Windows only. >>>> >>>> It looks like a Cython for IronPython might be closer to working: >>>> >>>> https://bitbucket.org/cwitty/cython-for-ironpython/overview >>>> >>>> Especially here considering these: >>>> >>>> >>>> https://bitbucket.org/cwitty/cython-for-ironpython/src/9dc7e1a2d56a/Cython/Includes/posix/?at=default >>>> >>>> https://bitbucket.org/cwitty/cython-for-ironpython/commits/9dc7e1a2d56a3e3c8a7b3c7a5d23073a54b2b814 >>>> >>>> But, I haven't worked much with Python and C libraries. >>>> >>>> -Doug >>>> >>>>> >>>>> >>>>> -Steve >>>>> >>>>> >>>>> On Sun, Jun 1, 2014 at 6:29 AM, Doug Blank >>>>> wrote: >>>>>> >>>>>> On Sun, Jun 1, 2014 at 3:05 AM, Jeff Hardy wrote: >>>>>>> >>>>>>> On Sat, May 31, 2014 at 3:23 PM, Doug Blank >>>>>>> wrote: >>>>>>>> >>>>>>>> On Fri, May 30, 2014 at 5:22 PM, Steve Baer >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> I would definitely be interested in helping, but don't exactly know >>>>>>>>> where to start. We have a lot of users who would love to get access to numpy >>>>>>>>> on our OSX and 64bit windows versions of our product. This is only going to >>>>>>>>> become a bigger problem in the future since we will probably only have a >>>>>>>>> 64bit version for Window in the next version of Rhino. >>>>>>>> >>>>>>>> >>>>>>>> I agree. Let's start a serious discussion about how to solve the >>>>>>>> lack of numpy in IronPython. >>>>>>>> >>>>>>>> We could look into using ctypes (IronClad?) and wrap what already >>>>>>>> exists. >>>>>>>> >>>>>>>> We could look into a cross-platform DLL drop-in replacement. >>>>>>>> >>>>>>>> Between speed and compatibility, initially I'm most interested in >>>>>>>> compatibility. But speed should be a long term goal. >>>>>>>> >>>>>>>> We could write a pure-python prototype initially, and slowly move >>>>>>>> that to C#, or another CLR language. That would be useful for all >>>>>>>> non-C-based Python implementations, and would probably be quickest to write >>>>>>>> and test. >>>>>>>> >>>>>>>> A related note: Python3 just added a new matrix multiplication >>>>>>>> operator [1]. Hope to see more numpy-related functionality in standard >>>>>>>> Python in the future. >>>>>>>> >>>>>>>> Other ideas? Where to start? >>>>>>> >>>>>>> >>>>>>> I would start by asking the NumPy team what the best option is, and >>>>>>> seeing what the NumPyPy team are doing - the more work that can be shared, >>>>>>> the better. >>>>>> >>>>>> >>>>>> That is a good idea, and I started to try to define what we might mean >>>>>> by "best option." Almost all of the work I have seen coming from most >>>>>> related projects would define best in terms of speed. However, this comes at >>>>>> a pretty big cost in terms of maintenance---for example, having wrapped C >>>>>> libraries compiled for each platform, for each bus size (32, 64). >>>>>> >>>>>> In poking around, I think the Jython needs might be closest to ours. >>>>>> Looks like they have two starts: a pure-Java library based on the older >>>>>> Numeric [1], and a native interface for talking directly to C [2]. The >>>>>> native interface is a long term project, and is not yet to the point of >>>>>> working with numpy. As a quick test, I tried to IKVM-convert the pure Java >>>>>> jar file into a DLL. I think that could work (eventually) but would require >>>>>> bringing a lot of Jython, and would probably always be a little wonky. In >>>>>> addition, that API is the older Numeric. >>>>>> >>>>>> A third option, as mentioned by Ivan, is to have a bridge to CPython >>>>>> interfacing with numpy. But that would make CPython a dependency---not >>>>>> really something that many IronPython users would appreciate. >>>>>> >>>>>> Of course, if one is looking for doing numeric operations, you could >>>>>> use a different .NET/Mono math library. But what I am interested in is the >>>>>> numpy API, so that other code will be usable in IronPython. >>>>>> >>>>>> To me, it looks like the best bet at this point in time is to write >>>>>> our own. The next question is: write it in pure Python, or C#/F#/etc. >>>>>> >>>>>> If we write it in pure Python, there is the chance that some Jython >>>>>> developers (and maybe other Python implementation people) might be >>>>>> interested in helping. It would be slow, but we could rely on Python for >>>>>> handling type operations (float times int) to do the right thing. It would >>>>>> also be immediately usable by future CPython users. It could be the case >>>>>> that a future Python implementation could do some JITting to make it run >>>>>> fast enough. (A pure Python version might also be useful for educational >>>>>> uses, as I presume it would be more easily understood by Python students). >>>>>> >>>>>> If we write it as a CLR library, it would be as fast as managed code >>>>>> would allow, and be available for other CLR languages (like F#, Boo, etc). >>>>>> But we would probably be alone in developing it, as it is mostly of interest >>>>>> to Python users using numpy and the CLR. >>>>>> >>>>>> I guess I am leaning towards a pure Python implementation of the >>>>>> latest numpy API. Perhaps followed up by a DLL version. >>>>>> >>>>>> -Doug >>>>>> >>>>>> [1] - https://bitbucket.org/zornslemon/jnumeric-ra/overview >>>>>> [2] - http://jyni.org/ >>>>>> >>>>>> >>>>>>> >>>>>>> >>>>>>> - Jeff >>>>>> >>>>>> >>>>> >>>> >>> >> > > > _______________________________________________ > Ironpython-users mailing list > Ironpython-users at python.org > https://mail.python.org/mailman/listinfo/ironpython-users > From wsadkin at ParlanceCorp.com Tue Jun 3 22:08:08 2014 From: wsadkin at ParlanceCorp.com (Will Sadkin) Date: Tue, 3 Jun 2014 20:08:08 +0000 Subject: [Ironpython-users] UNSUBSCRIBE Message-ID: <75BADF2D5A2DD345819BDC79C664DA4D0152F588@Exchange2010.nameconnector.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Wed Jun 4 16:46:50 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Wed, 4 Jun 2014 16:46:50 +0200 Subject: [Ironpython-users] cython-for-ironpython In-Reply-To: References: <727D8E16AE957149B447FE368139F2B539BDF3CB@SERVER10> Message-ID: I have sorted it out. The cython-for-ironpython works as long as you stay with functionality used by numpy/scipy. Picking random test may expose thing which were simply not implemented. There is no dependency on ironclad. CallSite comes out of System.Core assembly - missing reference. --pawel -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Wed Jun 4 16:52:18 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Wed, 4 Jun 2014 16:52:18 +0200 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: I took the old version of numpy for a spin with 2.7.5b2 I had to adjust a couple of things, but it compiles and appears to work. The build instructions: https://github.com/numpy/numpy-refactor/wiki/Recompile --pawel -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug.blank at gmail.com Wed Jun 4 16:52:57 2014 From: doug.blank at gmail.com (Doug Blank) Date: Wed, 4 Jun 2014 10:52:57 -0400 Subject: [Ironpython-users] cython-for-ironpython In-Reply-To: References: <727D8E16AE957149B447FE368139F2B539BDF3CB@SERVER10> Message-ID: On Wed, Jun 4, 2014 at 10:46 AM, Pawel Jasinski wrote: > I have sorted it out. The cython-for-ironpython works as long as you stay > with functionality used by numpy/scipy. Picking random test may expose > thing which were simply not implemented. > There is no dependency on ironclad. > CallSite comes out of System.Core assembly - missing reference. > Pawel, can you make your example(s) available? I think it would be very useful to those of us trying to evaluate the numpy options. -Doug > > --pawel > > > > _______________________________________________ > Ironpython-users mailing list > Ironpython-users at python.org > https://mail.python.org/mailman/listinfo/ironpython-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug.blank at gmail.com Thu Jun 5 13:28:32 2014 From: doug.blank at gmail.com (Doug Blank) Date: Thu, 5 Jun 2014 07:28:32 -0400 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: I was able to make contact with Timothy Hochberg and Mark DeArman and get the source code to a couple of older numeric/numpy Python libraries. I'm not sure exactly how old these are, nor what version of numpy they implement, but the code can be found here: https://bitbucket.org/dblank/pure-numpy/overview src/psymeric is from Timothy and the rest is from Mark. bin/ is the result of building the rest (which includes C/C++ built with VS2012) but I don't see a build project. -Doug On Wed, Jun 4, 2014 at 10:52 AM, Pawel Jasinski wrote: > I took the old version of numpy for a spin with 2.7.5b2 > > I had to adjust a couple of things, but it compiles and appears to work. > The build instructions: > https://github.com/numpy/numpy-refactor/wiki/Recompile > > --pawel > > _______________________________________________ > Ironpython-users mailing list > Ironpython-users at python.org > https://mail.python.org/mailman/listinfo/ironpython-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Thu Jun 5 14:37:52 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Thu, 5 Jun 2014 14:37:52 +0200 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: I would recommend running a diff between the "pure-numpy" and https://github.com/numpy/numpy-refactor --pawel On Thu, Jun 5, 2014 at 1:28 PM, Doug Blank wrote: > I was able to make contact with Timothy Hochberg and Mark DeArman and get > the source code to a couple of older numeric/numpy Python libraries. > > I'm not sure exactly how old these are, nor what version of numpy they > implement, but the code can be found here: > > https://bitbucket.org/dblank/pure-numpy/overview > > src/psymeric is from Timothy and the rest is from Mark. bin/ is the result > of building the rest (which includes C/C++ built with VS2012) but I don't > see a build project. > > -Doug > > > > On Wed, Jun 4, 2014 at 10:52 AM, Pawel Jasinski > wrote: > >> I took the old version of numpy for a spin with 2.7.5b2 >> >> I had to adjust a couple of things, but it compiles and appears to work. >> The build instructions: >> https://github.com/numpy/numpy-refactor/wiki/Recompile >> >> --pawel >> >> _______________________________________________ >> Ironpython-users mailing list >> Ironpython-users at python.org >> https://mail.python.org/mailman/listinfo/ironpython-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Thu Jun 5 15:20:38 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Thu, 5 Jun 2014 15:20:38 +0200 Subject: [Ironpython-users] Fwd: numpy in IronPython In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Pawel Jasinski Date: Thu, Jun 5, 2014 at 3:18 PM Subject: Re: [Ironpython-users] numpy in IronPython To: Doug Blank Projects are the same except: - Marc changed namespace - Marc removed some projects (e.g. f2py) - diff reports to much (numpy/NumpyDotNet/ndarray.cs) probably cr/lf should be ignored/fixed - missing build files are present in numpy-refactor (iron_setup.py, *.sln) looking at diff in this format hurts. Please, use kdiff3 or equivalent. --pawel -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Fri Jun 6 19:47:59 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Fri, 6 Jun 2014 19:47:59 +0200 Subject: [Ironpython-users] launcher cp#35064 Message-ID: we have a moded version of pylauncher https://gist.github.com/paweljasinski/5e0b0b59648c6f85c489 as described in https://ironpython.codeplex.com/workitem/35064 Vernon agreed to give it a spin. What I am trying to figure out is distribution and ip integration. I would be very happy if it could make to 2.7.5 msi. Is it realistic? Should I just add a cproject to ironlanguages? I am also not particularly skilled with Wix magic, can anybody help? --pawel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mwpowellhtx at gmail.com Tue Jun 3 23:27:12 2014 From: mwpowellhtx at gmail.com (Michael Powell) Date: Tue, 3 Jun 2014 16:27:12 -0500 Subject: [Ironpython-users] Embedding obspy and dependencies Message-ID: Hello, I am doing some seismic work and would like to embed obspy and required dependencies in an IronPython-based C# .NET assembly. In and of itself, IronPython is straightforward enough. What I am not so clear on is how to configure libraries, dependencies, and so on. For instance, we could start from readily available binaries, but would we have difficulty with supported Python runtime versions, things of this nature. Would anyone care to comment here? Or has anyone done this type of thing already? Or even specifically working with obspy? Thank you. Best regards, Michael Powell From doug.blank at gmail.com Thu Jun 5 14:57:18 2014 From: doug.blank at gmail.com (Doug Blank) Date: Thu, 5 Jun 2014 08:57:18 -0400 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: On Thu, Jun 5, 2014 at 8:37 AM, Pawel Jasinski wrote: > I would recommend running a diff between the "pure-numpy" and > https://github.com/numpy/numpy-refactor > diff -r pure-numpy/src/numpy/ numpy-refactor/numpy/ Attached. What does this tell us? -Doug > > --pawel > > > > On Thu, Jun 5, 2014 at 1:28 PM, Doug Blank wrote: > >> I was able to make contact with Timothy Hochberg and Mark DeArman and get >> the source code to a couple of older numeric/numpy Python libraries. >> >> I'm not sure exactly how old these are, nor what version of numpy they >> implement, but the code can be found here: >> >> https://bitbucket.org/dblank/pure-numpy/overview >> >> src/psymeric is from Timothy and the rest is from Mark. bin/ is the >> result of building the rest (which includes C/C++ built with VS2012) but I >> don't see a build project. >> >> -Doug >> >> >> >> On Wed, Jun 4, 2014 at 10:52 AM, Pawel Jasinski > > wrote: >> >>> I took the old version of numpy for a spin with 2.7.5b2 >>> >>> I had to adjust a couple of things, but it compiles and appears to work. >>> The build instructions: >>> https://github.com/numpy/numpy-refactor/wiki/Recompile >>> >>> --pawel >>> >>> _______________________________________________ >>> Ironpython-users mailing list >>> Ironpython-users at python.org >>> https://mail.python.org/mailman/listinfo/ironpython-users >>> >>> >> > > _______________________________________________ > Ironpython-users mailing list > Ironpython-users at python.org > https://mail.python.org/mailman/listinfo/ironpython-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- Only in pure-numpy/src/numpy/: __config__.py diff -r pure-numpy/src/numpy/core/__init__.py numpy-refactor/numpy/core/__init__.py 4c4 < #from numpy.version import version as __version__ --- > from numpy.version import version as __version__ 53,55c53,55 < #from numpy.testing import Tester < #test = Tester(__file__).test < #bench = Tester(__file__).bench --- > from numpy.testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).bench diff -r pure-numpy/src/numpy/core/multiarray_clr.py numpy-refactor/numpy/core/multiarray_clr.py 6,7c6,7 < clr.AddReference("Numpy"); < from Cascade.VTFA.Python.Numpy import * --- > clr.AddReference("NumpyDotNet"); > from NumpyDotNet import * 9,10c9,10 < from Cascade.VTFA.Python.Numpy.ModuleMethods import * < import Cascade.VTFA.Python.Numpy.ModuleMethods as NDNMM --- > from NumpyDotNet.ModuleMethods import * > import NumpyDotNet.ModuleMethods as NDNMM Only in numpy-refactor/numpy/core: tests diff -r pure-numpy/src/numpy/core/umath_clr.py numpy-refactor/numpy/core/umath_clr.py 8,10c8,10 < clr.AddReference("Numpy") < import Cascade.VTFA.Python.Numpy < Cascade.VTFA.Python.Numpy.umath.__init__() --- > clr.AddReference("NumpyDotNet") > import NumpyDotNet > NumpyDotNet.umath.__init__() 13c13 < from Cascade.VTFA.Python.Numpy.umath import * --- > from NumpyDotNet.umath import * Only in pure-numpy/src/numpy/: Debug Only in numpy-refactor/numpy/: distutils Only in numpy-refactor/numpy/: f2py Only in pure-numpy/src/numpy/fft: bin Only in pure-numpy/src/numpy/fft: Debug diff -r pure-numpy/src/numpy/fft/fftpack_cython.cpp numpy-refactor/numpy/fft/fftpack_cython.cpp 177,180c177,180 < static CYTHON_INLINE int PyArray_CHKFLAGS(Cascade::VTFA::Python::Numpy::ndarray^, int); /*proto*/ < static CYTHON_INLINE void *PyArray_DATA(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE __pyx_t_5numpy_3fft_5numpy_intp_t *PyArray_DIMS(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE __pyx_t_5numpy_3fft_5numpy_intp_t PyArray_SIZE(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ --- > static CYTHON_INLINE int PyArray_CHKFLAGS(NumpyDotNet::ndarray^, int); /*proto*/ > static CYTHON_INLINE void *PyArray_DATA(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE __pyx_t_5numpy_3fft_5numpy_intp_t *PyArray_DIMS(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE __pyx_t_5numpy_3fft_5numpy_intp_t PyArray_SIZE(NumpyDotNet::ndarray^); /*proto*/ 187,188c187,188 < static System::Object^ cfftf(Cascade::VTFA::Python::Numpy::ndarray^, Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static System::Object^ cfftb(Cascade::VTFA::Python::Numpy::ndarray^, Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ --- > static System::Object^ cfftf(NumpyDotNet::ndarray^, NumpyDotNet::ndarray^); /*proto*/ > static System::Object^ cfftb(NumpyDotNet::ndarray^, NumpyDotNet::ndarray^); /*proto*/ 190,191c190,191 < static System::Object^ rfftf(Cascade::VTFA::Python::Numpy::ndarray^, Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static System::Object^ rfftb(Cascade::VTFA::Python::Numpy::ndarray^, Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ --- > static System::Object^ rfftf(NumpyDotNet::ndarray^, NumpyDotNet::ndarray^); /*proto*/ > static System::Object^ rfftb(NumpyDotNet::ndarray^, NumpyDotNet::ndarray^); /*proto*/ 258c258 < static System::Object^ cfftf(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op1, Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op2) { --- > static System::Object^ cfftf(NumpyDotNet::ndarray^ __pyx_v_op1, NumpyDotNet::ndarray^ __pyx_v_op2) { 265c265 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_data; --- > NumpyDotNet::ndarray^ __pyx_v_data; 288c288 < if (__pyx_t_3 != nullptr && dynamic_cast(__pyx_t_3) == nullptr) { --- > if (__pyx_t_3 != nullptr && dynamic_cast(__pyx_t_3) == nullptr) { 291c291 < __pyx_v_data = ((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_t_3); --- > __pyx_v_data = ((NumpyDotNet::ndarray^)__pyx_t_3); 316c316 < if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { --- > if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { 319c319 < __pyx_v_op2 = ((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_t_1); --- > __pyx_v_op2 = ((NumpyDotNet::ndarray^)__pyx_t_1); 463c463 < static System::Object^ cfftb(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op1, Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op2) { --- > static System::Object^ cfftb(NumpyDotNet::ndarray^ __pyx_v_op1, NumpyDotNet::ndarray^ __pyx_v_op2) { 518c518 < if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { --- > if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { 521c521 < __pyx_v_op2 = ((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_t_1); --- > __pyx_v_op2 = ((NumpyDotNet::ndarray^)__pyx_t_1); 543c543 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 551c551 < __pyx_v_npts = (PyArray_DIMS(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data))[__pyx_t_5]); --- > __pyx_v_npts = (PyArray_DIMS(((NumpyDotNet::ndarray^)__pyx_v_data))[__pyx_t_5]); 586c586 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 589c589 < __pyx_v_nrepeats = PyArray_SIZE(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data)); --- > __pyx_v_nrepeats = PyArray_SIZE(((NumpyDotNet::ndarray^)__pyx_v_data)); 607c607 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 610c610 < __pyx_v_dptr = ((double *)PyArray_DATA(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data))); --- > __pyx_v_dptr = ((double *)PyArray_DATA(((NumpyDotNet::ndarray^)__pyx_v_data))); 676c676 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op; --- > NumpyDotNet::ndarray^ __pyx_v_op; 698c698 < if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { --- > if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { 701c701 < __pyx_v_op = ((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_t_1); --- > __pyx_v_op = ((NumpyDotNet::ndarray^)__pyx_t_1); 736c736 < static System::Object^ rfftf(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op1, Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op2) { --- > static System::Object^ rfftf(NumpyDotNet::ndarray^ __pyx_v_op1, NumpyDotNet::ndarray^ __pyx_v_op2) { 799c799 < if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { --- > if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { 802c802 < __pyx_v_op2 = ((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_t_1); --- > __pyx_v_op2 = ((NumpyDotNet::ndarray^)__pyx_t_1); 815c815 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 823c823 < __pyx_v_npts = (PyArray_DIMS(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data))[__pyx_t_5]); --- > __pyx_v_npts = (PyArray_DIMS(((NumpyDotNet::ndarray^)__pyx_v_data))[__pyx_t_5]); 832c832 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 840c840 < (PyArray_DIMS(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data))[__pyx_t_6]) = (__Pyx_div_long(__pyx_v_npts, 2) + 1); --- > (PyArray_DIMS(((NumpyDotNet::ndarray^)__pyx_v_data))[__pyx_t_6]) = (__Pyx_div_long(__pyx_v_npts, 2) + 1); 852c852 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 855c855 < __pyx_t_1 = PyArray_ZEROS(__pyx_t_7, PyArray_DIMS(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data)), NPY_CDOUBLE, 0); --- > __pyx_t_1 = PyArray_ZEROS(__pyx_t_7, PyArray_DIMS(((NumpyDotNet::ndarray^)__pyx_v_data)), NPY_CDOUBLE, 0); 866c866 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 874c874 < (PyArray_DIMS(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data))[__pyx_t_8]) = __pyx_v_npts; --- > (PyArray_DIMS(((NumpyDotNet::ndarray^)__pyx_v_data))[__pyx_t_8]) = __pyx_v_npts; 883c883 < if (__pyx_v_ret != nullptr && dynamic_cast(__pyx_v_ret) == nullptr) { --- > if (__pyx_v_ret != nullptr && dynamic_cast(__pyx_v_ret) == nullptr) { 891c891 < __pyx_v_rstep = ((PyArray_DIMS(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_ret))[__pyx_t_9]) * 2); --- > __pyx_v_rstep = ((PyArray_DIMS(((NumpyDotNet::ndarray^)__pyx_v_ret))[__pyx_t_9]) * 2); 935c935 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 938c938 < __pyx_v_nrepeats = PyArray_SIZE(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data)); --- > __pyx_v_nrepeats = PyArray_SIZE(((NumpyDotNet::ndarray^)__pyx_v_data)); 956c956 < if (__pyx_v_ret != nullptr && dynamic_cast(__pyx_v_ret) == nullptr) { --- > if (__pyx_v_ret != nullptr && dynamic_cast(__pyx_v_ret) == nullptr) { 959c959 < __pyx_v_rptr = ((double *)PyArray_DATA(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_ret))); --- > __pyx_v_rptr = ((double *)PyArray_DATA(((NumpyDotNet::ndarray^)__pyx_v_ret))); 968c968 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 971c971 < __pyx_v_dptr = ((double *)PyArray_DATA(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data))); --- > __pyx_v_dptr = ((double *)PyArray_DATA(((NumpyDotNet::ndarray^)__pyx_v_data))); 1071c1071 < static System::Object^ rfftb(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op1, Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op2) { --- > static System::Object^ rfftb(NumpyDotNet::ndarray^ __pyx_v_op1, NumpyDotNet::ndarray^ __pyx_v_op2) { 1130c1130 < if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { --- > if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { 1133c1133 < __pyx_v_op2 = ((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_t_1); --- > __pyx_v_op2 = ((NumpyDotNet::ndarray^)__pyx_t_1); 1149c1149 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 1152c1152 < __pyx_t_1 = PyArray_ZEROS(__pyx_t_5, PyArray_DIMS(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data)), NPY_DOUBLE, 0); --- > __pyx_t_1 = PyArray_ZEROS(__pyx_t_5, PyArray_DIMS(((NumpyDotNet::ndarray^)__pyx_v_data)), NPY_DOUBLE, 0); 1163c1163 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 1171c1171 < __pyx_v_npts = (PyArray_DIMS(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data))[__pyx_t_6]); --- > __pyx_v_npts = (PyArray_DIMS(((NumpyDotNet::ndarray^)__pyx_v_data))[__pyx_t_6]); 1215c1215 < if (__pyx_v_ret != nullptr && dynamic_cast(__pyx_v_ret) == nullptr) { --- > if (__pyx_v_ret != nullptr && dynamic_cast(__pyx_v_ret) == nullptr) { 1218c1218 < __pyx_v_nrepeats = PyArray_SIZE(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_ret)); --- > __pyx_v_nrepeats = PyArray_SIZE(((NumpyDotNet::ndarray^)__pyx_v_ret)); 1236c1236 < if (__pyx_v_ret != nullptr && dynamic_cast(__pyx_v_ret) == nullptr) { --- > if (__pyx_v_ret != nullptr && dynamic_cast(__pyx_v_ret) == nullptr) { 1239c1239 < __pyx_v_rptr = ((double *)PyArray_DATA(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_ret))); --- > __pyx_v_rptr = ((double *)PyArray_DATA(((NumpyDotNet::ndarray^)__pyx_v_ret))); 1248c1248 < if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { --- > if (__pyx_v_data != nullptr && dynamic_cast(__pyx_v_data) == nullptr) { 1251c1251 < __pyx_v_dptr = ((double *)PyArray_DATA(((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_v_data))); --- > __pyx_v_dptr = ((double *)PyArray_DATA(((NumpyDotNet::ndarray^)__pyx_v_data))); 1344c1344 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_op; --- > NumpyDotNet::ndarray^ __pyx_v_op; 1366c1366 < if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { --- > if (__pyx_t_1 != nullptr && dynamic_cast(__pyx_t_1) == nullptr) { 1369c1369 < __pyx_v_op = ((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_t_1); --- > __pyx_v_op = ((NumpyDotNet::ndarray^)__pyx_t_1); 1583c1583 < static CYTHON_INLINE int PyArray_CHKFLAGS(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n, int __pyx_v_flags) { --- > static CYTHON_INLINE int PyArray_CHKFLAGS(NumpyDotNet::ndarray^ __pyx_v_n, int __pyx_v_flags) { 1614c1614 < static CYTHON_INLINE void *PyArray_DATA(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE void *PyArray_DATA(NumpyDotNet::ndarray^ __pyx_v_n) { 1645c1645 < static CYTHON_INLINE __pyx_t_5numpy_3fft_5numpy_intp_t *PyArray_DIMS(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE __pyx_t_5numpy_3fft_5numpy_intp_t *PyArray_DIMS(NumpyDotNet::ndarray^ __pyx_v_n) { 1676c1676 < static CYTHON_INLINE __pyx_t_5numpy_3fft_5numpy_intp_t PyArray_SIZE(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE __pyx_t_5numpy_3fft_5numpy_intp_t PyArray_SIZE(NumpyDotNet::ndarray^ __pyx_v_n) { 1704c1704 < * import Cascade::VTFA::Python::Numpy.NpyArray --- > * import NumpyDotNet.NpyArray 1709c1709 < System::Object^ __pyx_v_Numpy; --- > System::Object^ __pyx_v_NumpyDotNet; 1714c1714 < __pyx_v_Numpy = nullptr; --- > __pyx_v_NumpyDotNet = nullptr; 1720,1721c1720,1721 < * import Cascade::VTFA::Python::Numpy.NpyArray < * return Cascade::VTFA::Python::Numpy.NpyArray.FromAny(op, newtype, min_depth, max_depth, flags, context) --- > * import NumpyDotNet.NpyArray > * return NumpyDotNet.NpyArray.FromAny(op, newtype, min_depth, max_depth, flags, context) 1730,1731c1730,1731 < * import Cascade::VTFA::Python::Numpy.NpyArray # <<<<<<<<<<<<<< < * return Cascade::VTFA::Python::Numpy.NpyArray.FromAny(op, newtype, min_depth, max_depth, flags, context) --- > * import NumpyDotNet.NpyArray # <<<<<<<<<<<<<< > * return NumpyDotNet.NpyArray.FromAny(op, newtype, min_depth, max_depth, flags, context) 1734,1735c1734,1735 < __pyx_t_1 = LightExceptions::CheckAndThrow(PythonOps::ImportTop(__pyx_context, "Cascade::VTFA::Python::Numpy.NpyArray", -1)); < __pyx_v_Numpy = __pyx_t_1; --- > __pyx_t_1 = LightExceptions::CheckAndThrow(PythonOps::ImportTop(__pyx_context, "NumpyDotNet.NpyArray", -1)); > __pyx_v_NumpyDotNet = __pyx_t_1; 1740,1741c1740,1741 < * import Cascade::VTFA::Python::Numpy.NpyArray < * return Cascade::VTFA::Python::Numpy.NpyArray.FromAny(op, newtype, min_depth, max_depth, flags, context) # <<<<<<<<<<<<<< --- > * import NumpyDotNet.NpyArray > * return NumpyDotNet.NpyArray.FromAny(op, newtype, min_depth, max_depth, flags, context) # <<<<<<<<<<<<<< 1745c1745 < __pyx_t_1 = __site_get_NpyArray_229_22->Target(__site_get_NpyArray_229_22, __pyx_v_Numpy, __pyx_context); --- > __pyx_t_1 = __site_get_NpyArray_229_22->Target(__site_get_NpyArray_229_22, __pyx_v_NumpyDotNet, __pyx_context); 1760c1760 < * return Cascade::VTFA::Python::Numpy.NpyArray.FromAny(op, newtype, min_depth, max_depth, flags, context) --- > * return NumpyDotNet.NpyArray.FromAny(op, newtype, min_depth, max_depth, flags, context) 1964,1965c1964,1965 < // XXX skipping type ptr assignment for Cascade::VTFA::Python::Numpy::ndarray < // XXX skipping type ptr assignment for Cascade::VTFA::Python::Numpy::dtype --- > // XXX skipping type ptr assignment for NumpyDotNet::ndarray > // XXX skipping type ptr assignment for NumpyDotNet::dtype diff -r pure-numpy/src/numpy/fft/fft.vcxproj numpy-refactor/numpy/fft/fft.vcxproj 1,221c1,210 < ??? < < < < Debug < Win32 < < < Debug < x64 < < < Release < Win32 < < < Release < x64 < < < < Win32Proj < fftpack_lite < {8048AE7F-FE06-80A6-4504-372CCB3E7D5F} < SAK < SAK < SAK < SAK < < < < DynamicLibrary < true < true < false < v110 < < < DynamicLibrary < true < true < false < v110 < < < DynamicLibrary < false < true < v110 < < < DynamicLibrary < false < true < v110 < < < < < < < < < < < < < < < < < < < true < < < true < $(ProjectDir)bin < < < true < < < true < < < < WIN32;_DEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;FFT_EXPORTS;%(PreprocessorDefinitions) < MultiThreadedDebugDLL < Level3 < ProgramDatabase < Disabled < ..\..\libndarray\src < false < Async < Default < true < < < MachineX86 < true < Windows < kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) < $(OutDir);..\..\\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) < false < < < < < true < < < < < WIN32;_WIN64;_DEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;FFT_EXPORTS;%(PreprocessorDefinitions) < MultiThreadedDebugDLL < Level3 < ProgramDatabase < Disabled < ..\..\libndarray\src < false < Async < Default < true < < < true < Windows < kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) < $(SolutionDir)\PythonNumPy\libndarray\windows\bin;%(AdditionalLibraryDirectories) < false < < < < < true < < < < < WIN32;NDEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;FFT_EXPORTS;%(PreprocessorDefinitions) < MultiThreadedDLL < Level3 < ProgramDatabase < ..\..\libndarray\src;%(AdditionalIncludeDirectories) < < < MachineX86 < true < Windows < true < true < kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) < $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) < < < < < WIN32;_WIN64;NDEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;FFT_EXPORTS;%(PreprocessorDefinitions) < MultiThreadedDLL < Level3 < ProgramDatabase < ..\..\libndarray\src;%(AdditionalIncludeDirectories) < < < true < Windows < true < true < kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) < $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) < < < < < < < < < < < < < < < < < < < < < false < false < false < false < < < true < true < true < true < < < true < true < true < true < < < < < < < < < {9d8fa516-085c-40b2-93ca-f3a419b2fced} < < < < < --- > ??? > > > > Debug > Win32 > > > Debug > x64 > > > Release > Win32 > > > Release > x64 > > > > Win32Proj > fftpack_lite > > > > DynamicLibrary > true > true > false > > > DynamicLibrary > true > true > false > > > DynamicLibrary > false > true > > > DynamicLibrary > false > true > > > > > > > > > > > > > > > > > > > true > > > true > > > true > > > true > > > > WIN32;_DEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;FFT_EXPORTS;%(PreprocessorDefinitions) > MultiThreadedDebugDLL > Level3 > ProgramDatabase > Disabled > ..\..\libndarray\src > false > Async > Default > true > > > MachineX86 > true > Windows > kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) > $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) > false > > > > > true > > > > > WIN32;_WIN64;_DEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;FFT_EXPORTS;%(PreprocessorDefinitions) > MultiThreadedDebugDLL > Level3 > ProgramDatabase > Disabled > ..\..\libndarray\src > false > Async > Default > true > > > true > Windows > kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) > $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) > false > > > > > true > > > > > WIN32;NDEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;FFT_EXPORTS;%(PreprocessorDefinitions) > MultiThreadedDLL > Level3 > ProgramDatabase > ..\..\libndarray\src;%(AdditionalIncludeDirectories) > > > MachineX86 > true > Windows > true > true > kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) > $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) > > > > > WIN32;_WIN64;NDEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;FFT_EXPORTS;%(PreprocessorDefinitions) > MultiThreadedDLL > Level3 > ProgramDatabase > ..\..\libndarray\src;%(AdditionalIncludeDirectories) > > > true > Windows > true > true > kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) > $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) > > > > > > > > > > > > > > > > > > > > false > false > false > false > > > true > true > true > true > > > true > true > true > true > > > > > > > > > {9d8fa516-085c-40b2-93ca-f3a419b2fced} > > > > > Only in pure-numpy/src/numpy/fft: fft.vcxproj.vspscc diff -r pure-numpy/src/numpy/fft/__init__.py numpy-refactor/numpy/fft/__init__.py 6a7,9 > from numpy.testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).bench Only in numpy-refactor/numpy/fft: tests Only in pure-numpy/src/numpy/fft: x64 diff -r pure-numpy/src/numpy/__init__.py numpy-refactor/numpy/__init__.py 58a59,60 > random > Core Random Tools 64a67,73 > testing > Numpy testing tools > f2py > Fortran to Python Interface Generator. > distutils > Enhancements to distutils with support for > Fortran compilers support and more. 67a77,78 > test > Run numpy unittests 73a85,97 > __version__ > Numpy version string > > Viewing documentation using IPython > ----------------------------------- > Start IPython with the NumPy profile (``ipython -p numpy``), which will > import `numpy` under the alias `np`. Then, use the ``cpaste`` command to > paste examples into the shell. To see which functions are available in > `numpy`, type ``np.`` (where ```` refers to the TAB key), or use > ``np.*cos*?`` (where ```` refers to the ENTER key) to narrow > down the list. To view the docstring for a function, use > ``np.cos?`` (to view the docstring) and ``np.cos??`` (to view > the source code). 105,106c129,135 < < __version__ = "2.0.0" --- > try: > from version import git_revision as __git_revision__ > from version import version as __version__ > except: > print "Warning: version.py is missing, installation may be wrong." > __git_revision__ = "Unknown" > __version__ = "Unknown" 121a151,154 > from testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).bench > 128a162,164 > if sys.platform != 'cli': > import fft > import random diff -r pure-numpy/src/numpy/lib/__init__.py numpy-refactor/numpy/lib/__init__.py 1a2 > from numpy.version import version as __version__ 34a36,38 > from numpy.testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).bench Only in numpy-refactor/numpy/lib: tests Only in pure-numpy/src/numpy/linalg: bin Only in pure-numpy/src/numpy/linalg: Debug diff -r pure-numpy/src/numpy/linalg/__init__.py numpy-refactor/numpy/linalg/__init__.py 48a49,52 > > from numpy.testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).test diff -r pure-numpy/src/numpy/linalg/lapack_lite.cpp numpy-refactor/numpy/linalg/lapack_lite.cpp 335c335 < static CYTHON_INLINE Cascade::VTFA::Python::Numpy::dtype^ NpyArray_FindArrayType_2args(System::Object^, Cascade::VTFA::Python::Numpy::dtype^); /*proto*/ --- > static CYTHON_INLINE NumpyDotNet::dtype^ NpyArray_FindArrayType_2args(System::Object^, NumpyDotNet::dtype^); /*proto*/ 340c340 < static CYTHON_INLINE System::Object^ PyArray_Empty(int, __pyx_t_5numpy_6linalg_5numpy_npy_intp *, Cascade::VTFA::Python::Numpy::dtype^, int); /*proto*/ --- > static CYTHON_INLINE System::Object^ PyArray_Empty(int, __pyx_t_5numpy_6linalg_5numpy_npy_intp *, NumpyDotNet::dtype^, int); /*proto*/ 344,348c344,348 < static CYTHON_INLINE int PyArray_CHKFLAGS(Cascade::VTFA::Python::Numpy::ndarray^, int); /*proto*/ < static CYTHON_INLINE void *PyArray_DATA(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t *PyArray_DIMS(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE System::Object^ PyArray_DESCR(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE int PyArray_ITEMSIZE(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ --- > static CYTHON_INLINE int PyArray_CHKFLAGS(NumpyDotNet::ndarray^, int); /*proto*/ > static CYTHON_INLINE void *PyArray_DATA(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t *PyArray_DIMS(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE System::Object^ PyArray_DESCR(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE int PyArray_ITEMSIZE(NumpyDotNet::ndarray^); /*proto*/ 350,356c350,356 < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t PyArray_DIM(Cascade::VTFA::Python::Numpy::ndarray^, int); /*proto*/ < static CYTHON_INLINE System::Object^ PyArray_NDIM(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t PyArray_SIZE(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_npy_intp *PyArray_STRIDES(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_npy_intp PyArray_NBYTES(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE NpyArray *PyArray_ARRAY(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE int PyArray_TYPE(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ --- > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t PyArray_DIM(NumpyDotNet::ndarray^, int); /*proto*/ > static CYTHON_INLINE System::Object^ PyArray_NDIM(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t PyArray_SIZE(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_npy_intp *PyArray_STRIDES(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_npy_intp PyArray_NBYTES(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE NpyArray *PyArray_ARRAY(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE int PyArray_TYPE(NumpyDotNet::ndarray^); /*proto*/ 359c359 < static CYTHON_INLINE int PyDataType_TYPE_NUM(Cascade::VTFA::Python::Numpy::dtype^); /*proto*/ --- > static CYTHON_INLINE int PyDataType_TYPE_NUM(NumpyDotNet::dtype^); /*proto*/ 370,371c370,371 < static CYTHON_INLINE NpyArrayIterObject *PyArray_IterNew(Cascade::VTFA::Python::Numpy::ndarray^); /*proto*/ < static CYTHON_INLINE NpyArrayIterObject *PyArray_IterAllButAxis(Cascade::VTFA::Python::Numpy::ndarray^, int *); /*proto*/ --- > static CYTHON_INLINE NpyArrayIterObject *PyArray_IterNew(NumpyDotNet::ndarray^); /*proto*/ > static CYTHON_INLINE NpyArrayIterObject *PyArray_IterAllButAxis(NumpyDotNet::ndarray^, int *); /*proto*/ 378c378 < static CYTHON_INLINE Cascade::VTFA::Python::Numpy::ndarray^ NpyIter_ARRAY(NpyArrayIterObject *); /*proto*/ --- > static CYTHON_INLINE NumpyDotNet::ndarray^ NpyIter_ARRAY(NpyArrayIterObject *); /*proto*/ 380c380 < static int check_object(Cascade::VTFA::Python::Numpy::ndarray^, int, char *, char *, char *); /*proto*/ --- > static int check_object(NumpyDotNet::ndarray^, int, char *, char *, char *); /*proto*/ 786c786 < static int check_object(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_ob, int __pyx_v_t, char *__pyx_v_obname, char *__pyx_v_tname, char *__pyx_v_funname) { --- > static int check_object(NumpyDotNet::ndarray^ __pyx_v_ob, int __pyx_v_t, char *__pyx_v_obname, char *__pyx_v_tname, char *__pyx_v_funname) { 944c944 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 946,948c946,948 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_wr = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_wi = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_vl = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_wr = nullptr; > NumpyDotNet::ndarray^ __pyx_v_wi = nullptr; > NumpyDotNet::ndarray^ __pyx_v_vl = nullptr; 950c950 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_vr = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_vr = nullptr; 952c952 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 970c970 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 972,974c972,974 < __pyx_v_wr = ((Cascade::VTFA::Python::Numpy::ndarray^)wr); < __pyx_v_wi = ((Cascade::VTFA::Python::Numpy::ndarray^)wi); < __pyx_v_vl = ((Cascade::VTFA::Python::Numpy::ndarray^)vl); --- > __pyx_v_wr = ((NumpyDotNet::ndarray^)wr); > __pyx_v_wi = ((NumpyDotNet::ndarray^)wi); > __pyx_v_vl = ((NumpyDotNet::ndarray^)vl); 976c976 < __pyx_v_vr = ((Cascade::VTFA::Python::Numpy::ndarray^)vr); --- > __pyx_v_vr = ((NumpyDotNet::ndarray^)vr); 978c978 < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 982c982 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 985c985 < if (unlikely(dynamic_cast(__pyx_v_wr) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_wr) == nullptr)) { 988c988 < if (unlikely(dynamic_cast(__pyx_v_wi) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_wi) == nullptr)) { 991c991 < if (unlikely(dynamic_cast(__pyx_v_vl) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_vl) == nullptr)) { 994c994 < if (unlikely(dynamic_cast(__pyx_v_vr) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_vr) == nullptr)) { 997c997 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 1269c1269 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 1271,1272c1271,1272 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_w = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_w = nullptr; > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 1274c1274 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_iwork = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_iwork = nullptr; 1292c1292 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 1294,1295c1294,1295 < __pyx_v_w = ((Cascade::VTFA::Python::Numpy::ndarray^)w); < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_w = ((NumpyDotNet::ndarray^)w); > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 1297c1297 < __pyx_v_iwork = ((Cascade::VTFA::Python::Numpy::ndarray^)iwork); --- > __pyx_v_iwork = ((NumpyDotNet::ndarray^)iwork); 1301c1301 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 1304c1304 < if (unlikely(dynamic_cast(__pyx_v_w) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_w) == nullptr)) { 1307c1307 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 1310c1310 < if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { 1541c1541 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 1543,1544c1543,1544 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_w = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_w = nullptr; > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 1546c1546 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_rwork = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_rwork = nullptr; 1548c1548 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_iwork = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_iwork = nullptr; 1566c1566 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 1568,1569c1568,1569 < __pyx_v_w = ((Cascade::VTFA::Python::Numpy::ndarray^)w); < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_w = ((NumpyDotNet::ndarray^)w); > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 1571c1571 < __pyx_v_rwork = ((Cascade::VTFA::Python::Numpy::ndarray^)rwork); --- > __pyx_v_rwork = ((NumpyDotNet::ndarray^)rwork); 1573c1573 < __pyx_v_iwork = ((Cascade::VTFA::Python::Numpy::ndarray^)iwork); --- > __pyx_v_iwork = ((NumpyDotNet::ndarray^)iwork); 1577c1577 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 1580c1580 < if (unlikely(dynamic_cast(__pyx_v_w) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_w) == nullptr)) { 1583c1583 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 1586c1586 < if (unlikely(dynamic_cast(__pyx_v_rwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_rwork) == nullptr)) { 1589c1589 < if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { 1846c1846 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 1848c1848 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_b = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_b = nullptr; 1850c1850 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_s = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_s = nullptr; 1853c1853 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 1855c1855 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_iwork = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_iwork = nullptr; 1866c1866 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 1868c1868 < __pyx_v_b = ((Cascade::VTFA::Python::Numpy::ndarray^)b); --- > __pyx_v_b = ((NumpyDotNet::ndarray^)b); 1870c1870 < __pyx_v_s = ((Cascade::VTFA::Python::Numpy::ndarray^)s); --- > __pyx_v_s = ((NumpyDotNet::ndarray^)s); 1873c1873 < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 1875c1875 < __pyx_v_iwork = ((Cascade::VTFA::Python::Numpy::ndarray^)iwork); --- > __pyx_v_iwork = ((NumpyDotNet::ndarray^)iwork); 1878c1878 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 1881c1881 < if (unlikely(dynamic_cast(__pyx_v_b) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_b) == nullptr)) { 1884c1884 < if (unlikely(dynamic_cast(__pyx_v_s) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_s) == nullptr)) { 1887c1887 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 1890c1890 < if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { 2125c2125 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 2127,2128c2127,2128 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_ipiv = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_b = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_ipiv = nullptr; > NumpyDotNet::ndarray^ __pyx_v_b = nullptr; 2139c2139 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 2141,2142c2141,2142 < __pyx_v_ipiv = ((Cascade::VTFA::Python::Numpy::ndarray^)ipiv); < __pyx_v_b = ((Cascade::VTFA::Python::Numpy::ndarray^)b); --- > __pyx_v_ipiv = ((NumpyDotNet::ndarray^)ipiv); > __pyx_v_b = ((NumpyDotNet::ndarray^)b); 2146c2146 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 2149c2149 < if (unlikely(dynamic_cast(__pyx_v_ipiv) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_ipiv) == nullptr)) { 2152c2152 < if (unlikely(dynamic_cast(__pyx_v_b) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_b) == nullptr)) { 2314c2314 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 2316,2317c2316,2317 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_s = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_u = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_s = nullptr; > NumpyDotNet::ndarray^ __pyx_v_u = nullptr; 2319c2319 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_vt = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_vt = nullptr; 2321c2321 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 2323c2323 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_iwork = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_iwork = nullptr; 2350c2350 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 2352,2353c2352,2353 < __pyx_v_s = ((Cascade::VTFA::Python::Numpy::ndarray^)s); < __pyx_v_u = ((Cascade::VTFA::Python::Numpy::ndarray^)u); --- > __pyx_v_s = ((NumpyDotNet::ndarray^)s); > __pyx_v_u = ((NumpyDotNet::ndarray^)u); 2355c2355 < __pyx_v_vt = ((Cascade::VTFA::Python::Numpy::ndarray^)vt); --- > __pyx_v_vt = ((NumpyDotNet::ndarray^)vt); 2357c2357 < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 2359c2359 < __pyx_v_iwork = ((Cascade::VTFA::Python::Numpy::ndarray^)iwork); --- > __pyx_v_iwork = ((NumpyDotNet::ndarray^)iwork); 2362c2362 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 2365c2365 < if (unlikely(dynamic_cast(__pyx_v_s) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_s) == nullptr)) { 2368c2368 < if (unlikely(dynamic_cast(__pyx_v_u) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_u) == nullptr)) { 2371c2371 < if (unlikely(dynamic_cast(__pyx_v_vt) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_vt) == nullptr)) { 2374c2374 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 2377c2377 < if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { 2821c2821 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 2823c2823 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_ipiv = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_ipiv = nullptr; 2833c2833 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 2835c2835 < __pyx_v_ipiv = ((Cascade::VTFA::Python::Numpy::ndarray^)ipiv); --- > __pyx_v_ipiv = ((NumpyDotNet::ndarray^)ipiv); 2838c2838 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 2841c2841 < if (unlikely(dynamic_cast(__pyx_v_ipiv) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_ipiv) == nullptr)) { 2976c2976 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 2991c2991 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 2995c2995 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 3120c3120 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 3122,3123c3122,3123 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_tau = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_tau = nullptr; > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 3134c3134 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 3136,3137c3136,3137 < __pyx_v_tau = ((Cascade::VTFA::Python::Numpy::ndarray^)tau); < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_tau = ((NumpyDotNet::ndarray^)tau); > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 3141c3141 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 3144c3144 < if (unlikely(dynamic_cast(__pyx_v_tau) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_tau) == nullptr)) { 3147c3147 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 3309c3309 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 3311,3312c3311,3312 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_tau = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_tau = nullptr; > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 3324c3324 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 3326,3327c3326,3327 < __pyx_v_tau = ((Cascade::VTFA::Python::Numpy::ndarray^)tau); < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_tau = ((NumpyDotNet::ndarray^)tau); > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 3331c3331 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 3334c3334 < if (unlikely(dynamic_cast(__pyx_v_tau) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_tau) == nullptr)) { 3337c3337 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 3455c3455 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 3457,3458c3457,3458 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_w = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_vl = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_w = nullptr; > NumpyDotNet::ndarray^ __pyx_v_vl = nullptr; 3460c3460 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_vr = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_vr = nullptr; 3462c3462 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 3464c3464 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_rwork = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_rwork = nullptr; 3481c3481 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 3483,3484c3483,3484 < __pyx_v_w = ((Cascade::VTFA::Python::Numpy::ndarray^)w); < __pyx_v_vl = ((Cascade::VTFA::Python::Numpy::ndarray^)vl); --- > __pyx_v_w = ((NumpyDotNet::ndarray^)w); > __pyx_v_vl = ((NumpyDotNet::ndarray^)vl); 3486c3486 < __pyx_v_vr = ((Cascade::VTFA::Python::Numpy::ndarray^)vr); --- > __pyx_v_vr = ((NumpyDotNet::ndarray^)vr); 3488c3488 < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 3490c3490 < __pyx_v_rwork = ((Cascade::VTFA::Python::Numpy::ndarray^)rwork); --- > __pyx_v_rwork = ((NumpyDotNet::ndarray^)rwork); 3493c3493 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 3496c3496 < if (unlikely(dynamic_cast(__pyx_v_w) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_w) == nullptr)) { 3499c3499 < if (unlikely(dynamic_cast(__pyx_v_vl) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_vl) == nullptr)) { 3502c3502 < if (unlikely(dynamic_cast(__pyx_v_vr) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_vr) == nullptr)) { 3505c3505 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 3508c3508 < if (unlikely(dynamic_cast(__pyx_v_rwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_rwork) == nullptr)) { 3780c3780 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 3782c3782 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_b = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_b = nullptr; 3784c3784 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_s = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_s = nullptr; 3787c3787 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 3789,3790c3789,3790 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_rwork = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_iwork = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_rwork = nullptr; > NumpyDotNet::ndarray^ __pyx_v_iwork = nullptr; 3801c3801 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 3803c3803 < __pyx_v_b = ((Cascade::VTFA::Python::Numpy::ndarray^)b); --- > __pyx_v_b = ((NumpyDotNet::ndarray^)b); 3805c3805 < __pyx_v_s = ((Cascade::VTFA::Python::Numpy::ndarray^)s); --- > __pyx_v_s = ((NumpyDotNet::ndarray^)s); 3808c3808 < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 3810,3811c3810,3811 < __pyx_v_rwork = ((Cascade::VTFA::Python::Numpy::ndarray^)rwork); < __pyx_v_iwork = ((Cascade::VTFA::Python::Numpy::ndarray^)iwork); --- > __pyx_v_rwork = ((NumpyDotNet::ndarray^)rwork); > __pyx_v_iwork = ((NumpyDotNet::ndarray^)iwork); 3814c3814 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 3817c3817 < if (unlikely(dynamic_cast(__pyx_v_b) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_b) == nullptr)) { 3820c3820 < if (unlikely(dynamic_cast(__pyx_v_s) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_s) == nullptr)) { 3823c3823 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 3826c3826 < if (unlikely(dynamic_cast(__pyx_v_rwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_rwork) == nullptr)) { 3829c3829 < if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { 4068c4068 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 4070,4071c4070,4071 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_ipiv = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_b = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_ipiv = nullptr; > NumpyDotNet::ndarray^ __pyx_v_b = nullptr; 4082c4082 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 4084,4085c4084,4085 < __pyx_v_ipiv = ((Cascade::VTFA::Python::Numpy::ndarray^)ipiv); < __pyx_v_b = ((Cascade::VTFA::Python::Numpy::ndarray^)b); --- > __pyx_v_ipiv = ((NumpyDotNet::ndarray^)ipiv); > __pyx_v_b = ((NumpyDotNet::ndarray^)b); 4089c4089 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 4092c4092 < if (unlikely(dynamic_cast(__pyx_v_ipiv) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_ipiv) == nullptr)) { 4095c4095 < if (unlikely(dynamic_cast(__pyx_v_b) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_b) == nullptr)) { 4257c4257 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 4259,4260c4259,4260 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_s = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_u = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_s = nullptr; > NumpyDotNet::ndarray^ __pyx_v_u = nullptr; 4262c4262 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_vt = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_vt = nullptr; 4264c4264 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 4266,4267c4266,4267 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_rwork = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_iwork = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_rwork = nullptr; > NumpyDotNet::ndarray^ __pyx_v_iwork = nullptr; 4282c4282 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 4284,4285c4284,4285 < __pyx_v_s = ((Cascade::VTFA::Python::Numpy::ndarray^)s); < __pyx_v_u = ((Cascade::VTFA::Python::Numpy::ndarray^)u); --- > __pyx_v_s = ((NumpyDotNet::ndarray^)s); > __pyx_v_u = ((NumpyDotNet::ndarray^)u); 4287c4287 < __pyx_v_vt = ((Cascade::VTFA::Python::Numpy::ndarray^)vt); --- > __pyx_v_vt = ((NumpyDotNet::ndarray^)vt); 4289c4289 < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 4291,4292c4291,4292 < __pyx_v_rwork = ((Cascade::VTFA::Python::Numpy::ndarray^)rwork); < __pyx_v_iwork = ((Cascade::VTFA::Python::Numpy::ndarray^)iwork); --- > __pyx_v_rwork = ((NumpyDotNet::ndarray^)rwork); > __pyx_v_iwork = ((NumpyDotNet::ndarray^)iwork); 4295c4295 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 4298c4298 < if (unlikely(dynamic_cast(__pyx_v_s) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_s) == nullptr)) { 4301c4301 < if (unlikely(dynamic_cast(__pyx_v_u) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_u) == nullptr)) { 4304c4304 < if (unlikely(dynamic_cast(__pyx_v_vt) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_vt) == nullptr)) { 4307c4307 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 4310c4310 < if (unlikely(dynamic_cast(__pyx_v_rwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_rwork) == nullptr)) { 4313c4313 < if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_iwork) == nullptr)) { 4583c4583 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 4585c4585 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_ipiv = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_ipiv = nullptr; 4595c4595 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 4597c4597 < __pyx_v_ipiv = ((Cascade::VTFA::Python::Numpy::ndarray^)ipiv); --- > __pyx_v_ipiv = ((NumpyDotNet::ndarray^)ipiv); 4600c4600 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 4603c4603 < if (unlikely(dynamic_cast(__pyx_v_ipiv) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_ipiv) == nullptr)) { 4738c4738 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 4753c4753 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 4757c4757 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 4882c4882 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 4884,4885c4884,4885 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_tau = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_tau = nullptr; > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 4896c4896 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 4898,4899c4898,4899 < __pyx_v_tau = ((Cascade::VTFA::Python::Numpy::ndarray^)tau); < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_tau = ((NumpyDotNet::ndarray^)tau); > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 4903c4903 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 4906c4906 < if (unlikely(dynamic_cast(__pyx_v_tau) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_tau) == nullptr)) { 4909c4909 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 5071c5071 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_a = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_a = nullptr; 5073,5074c5073,5074 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_tau = nullptr; < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_work = nullptr; --- > NumpyDotNet::ndarray^ __pyx_v_tau = nullptr; > NumpyDotNet::ndarray^ __pyx_v_work = nullptr; 5086c5086 < __pyx_v_a = ((Cascade::VTFA::Python::Numpy::ndarray^)a); --- > __pyx_v_a = ((NumpyDotNet::ndarray^)a); 5088,5089c5088,5089 < __pyx_v_tau = ((Cascade::VTFA::Python::Numpy::ndarray^)tau); < __pyx_v_work = ((Cascade::VTFA::Python::Numpy::ndarray^)work); --- > __pyx_v_tau = ((NumpyDotNet::ndarray^)tau); > __pyx_v_work = ((NumpyDotNet::ndarray^)work); 5093c5093 < if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_a) == nullptr)) { 5096c5096 < if (unlikely(dynamic_cast(__pyx_v_tau) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_tau) == nullptr)) { 5099c5099 < if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { --- > if (unlikely(dynamic_cast(__pyx_v_work) == nullptr)) { 5205c5205 < * dtype NpyArray_FindArrayType_3args "Cascade::VTFA::Python::Numpy::NpyArray::FindArrayType" (object src, dtype minitype, int max) --- > * dtype NpyArray_FindArrayType_3args "NumpyDotNet::NpyArray::FindArrayType" (object src, dtype minitype, int max) 5212,5213c5212,5213 < static CYTHON_INLINE Cascade::VTFA::Python::Numpy::dtype^ NpyArray_FindArrayType_2args(System::Object^ __pyx_v_src, Cascade::VTFA::Python::Numpy::dtype^ __pyx_v_minitype) { < Cascade::VTFA::Python::Numpy::dtype^ __pyx_r = nullptr; --- > static CYTHON_INLINE NumpyDotNet::dtype^ NpyArray_FindArrayType_2args(System::Object^ __pyx_v_src, NumpyDotNet::dtype^ __pyx_v_minitype) { > NumpyDotNet::dtype^ __pyx_r = nullptr; 5223,5224c5223,5224 < __pyx_t_1 = ((System::Object^)Cascade::VTFA::Python::Numpy::NpyArray::FindArrayType(__pyx_v_src, __pyx_v_minitype, NPY_MAXDIMS)); < __pyx_r = ((Cascade::VTFA::Python::Numpy::dtype^)__pyx_t_1); --- > __pyx_t_1 = ((System::Object^)NumpyDotNet::NpyArray::FindArrayType(__pyx_v_src, __pyx_v_minitype, NPY_MAXDIMS)); > __pyx_r = ((NumpyDotNet::dtype^)__pyx_t_1); 5493c5493 < static CYTHON_INLINE System::Object^ PyArray_Empty(int __pyx_v_nd, __pyx_t_5numpy_6linalg_5numpy_npy_intp *__pyx_v_dims, Cascade::VTFA::Python::Numpy::dtype^ __pyx_v_descr, int __pyx_v_fortran) { --- > static CYTHON_INLINE System::Object^ PyArray_Empty(int __pyx_v_nd, __pyx_t_5numpy_6linalg_5numpy_npy_intp *__pyx_v_dims, NumpyDotNet::dtype^ __pyx_v_descr, int __pyx_v_fortran) { 5700c5700 < static CYTHON_INLINE int PyArray_CHKFLAGS(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n, int __pyx_v_flags) { --- > static CYTHON_INLINE int PyArray_CHKFLAGS(NumpyDotNet::ndarray^ __pyx_v_n, int __pyx_v_flags) { 5731c5731 < static CYTHON_INLINE void *PyArray_DATA(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE void *PyArray_DATA(NumpyDotNet::ndarray^ __pyx_v_n) { 5762c5762 < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t *PyArray_DIMS(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t *PyArray_DIMS(NumpyDotNet::ndarray^ __pyx_v_n) { 5793c5793 < static CYTHON_INLINE System::Object^ PyArray_DESCR(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE System::Object^ PyArray_DESCR(NumpyDotNet::ndarray^ __pyx_v_n) { 5826c5826 < static CYTHON_INLINE int PyArray_ITEMSIZE(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE int PyArray_ITEMSIZE(NumpyDotNet::ndarray^ __pyx_v_n) { 5941c5941 < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t PyArray_DIM(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n, int __pyx_v_dim) { --- > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t PyArray_DIM(NumpyDotNet::ndarray^ __pyx_v_n, int __pyx_v_dim) { 5972c5972 < static CYTHON_INLINE System::Object^ PyArray_NDIM(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_obj) { --- > static CYTHON_INLINE System::Object^ PyArray_NDIM(NumpyDotNet::ndarray^ __pyx_v_obj) { 6001c6001 < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t PyArray_SIZE(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_intp_t PyArray_SIZE(NumpyDotNet::ndarray^ __pyx_v_n) { 6032c6032 < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_npy_intp *PyArray_STRIDES(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_npy_intp *PyArray_STRIDES(NumpyDotNet::ndarray^ __pyx_v_n) { 6063c6063 < static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_npy_intp PyArray_NBYTES(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE __pyx_t_5numpy_6linalg_5numpy_npy_intp PyArray_NBYTES(NumpyDotNet::ndarray^ __pyx_v_n) { 6094c6094 < static CYTHON_INLINE NpyArray *PyArray_ARRAY(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE NpyArray *PyArray_ARRAY(NumpyDotNet::ndarray^ __pyx_v_n) { 6125c6125 < static CYTHON_INLINE int PyArray_TYPE(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE int PyArray_TYPE(NumpyDotNet::ndarray^ __pyx_v_n) { 6219c6219 < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_ret; --- > NumpyDotNet::ndarray^ __pyx_v_ret; 6232c6232 < __pyx_v_ret = ((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_t_1); --- > __pyx_v_ret = ((NumpyDotNet::ndarray^)__pyx_t_1); 6267c6267 < static CYTHON_INLINE int PyDataType_TYPE_NUM(Cascade::VTFA::Python::Numpy::dtype^ __pyx_v_t) { --- > static CYTHON_INLINE int PyDataType_TYPE_NUM(NumpyDotNet::dtype^ __pyx_v_t) { 6838c6838 < static CYTHON_INLINE NpyArrayIterObject *PyArray_IterNew(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n) { --- > static CYTHON_INLINE NpyArrayIterObject *PyArray_IterNew(NumpyDotNet::ndarray^ __pyx_v_n) { 6869c6869 < static CYTHON_INLINE NpyArrayIterObject *PyArray_IterAllButAxis(Cascade::VTFA::Python::Numpy::ndarray^ __pyx_v_n, int *__pyx_v_inaxis) { --- > static CYTHON_INLINE NpyArrayIterObject *PyArray_IterAllButAxis(NumpyDotNet::ndarray^ __pyx_v_n, int *__pyx_v_inaxis) { 7045,7046c7045,7046 < static CYTHON_INLINE Cascade::VTFA::Python::Numpy::ndarray^ NpyIter_ARRAY(NpyArrayIterObject *__pyx_v_iter) { < Cascade::VTFA::Python::Numpy::ndarray^ __pyx_r = nullptr; --- > static CYTHON_INLINE NumpyDotNet::ndarray^ NpyIter_ARRAY(NpyArrayIterObject *__pyx_v_iter) { > NumpyDotNet::ndarray^ __pyx_r = nullptr; 7055c7055 < __pyx_r = ((Cascade::VTFA::Python::Numpy::ndarray^)__pyx_t_1); --- > __pyx_r = ((NumpyDotNet::ndarray^)__pyx_t_1); 7436,7437c7436,7437 < // XXX skipping type ptr assignment for Cascade::VTFA::Python::Numpy::ndarray < // XXX skipping type ptr assignment for Cascade::VTFA::Python::Numpy::dtype --- > // XXX skipping type ptr assignment for NumpyDotNet::ndarray > // XXX skipping type ptr assignment for NumpyDotNet::dtype diff -r pure-numpy/src/numpy/linalg/lapack_lite.vcxproj numpy-refactor/numpy/linalg/lapack_lite.vcxproj 1,242c1,231 < ??? < < < < Debug < Win32 < < < Debug < x64 < < < Release < Win32 < < < Release < x64 < < < < Win32Proj < lapack_lite < {0BFE2D51-BB88-6319-E3C5-35F6D203AFCD} < SAK < SAK < SAK < SAK < < < < DynamicLibrary < true < true < false < v110 < < < DynamicLibrary < true < true < false < v110 < < < DynamicLibrary < false < true < v110 < < < DynamicLibrary < false < true < v110 < < < < < < < < < < < < < < < < < < < true < < < true < $(ProjectDir)bin < < < true < < < true < < < < WIN32;_DEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;LAPACK_LITE_EXPORTS;%(PreprocessorDefinitions) < MultiThreadedDebugDLL < Level3 < ProgramDatabase < Disabled < ..\..\libndarray\src < false < Async < Default < true < < < MachineX86 < true < Windows < kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) < $(OutDir);..\..\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) < false < < < < < true < < < < < WIN32;_WIN64;_DEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;LAPACK_LITE_EXPORTS;%(PreprocessorDefinitions) < MultiThreadedDebugDLL < Level3 < ProgramDatabase < Disabled < ..\..\libndarray\src < false < Async < Default < true < < < true < Windows < kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) < $(SolutionDir)\PythonNumPy\libndarray\windows\bin;%(AdditionalLibraryDirectories) < false < < < < < true < < < < < WIN32;NDEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;LAPACK_LITE_EXPORTS;%(PreprocessorDefinitions) < MultiThreadedDLL < Level3 < ProgramDatabase < ..\..\libndarray\src;%(AdditionalIncludeDirectories) < true < < < MachineX86 < true < Windows < true < true < kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) < $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) < < < < < WIN32;_WIN64;NDEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;LAPACK_LITE_EXPORTS;%(PreprocessorDefinitions) < MultiThreadedDLL < Level3 < ProgramDatabase < ..\..\libndarray\src;%(AdditionalIncludeDirectories) < true < < < true < Windows < true < true < kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) < $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) < < < < < < < < < < < < < < < < < < < < < < < < < false < false < false < false < < < false < false < false < false < < < false < false < false < false < < < false < false < false < false < < < < false < false < false < false < < < false < false < false < false < < < < < {9d8fa516-085c-40b2-93ca-f3a419b2fced} < < < < < --- > ??? > > > > Debug > Win32 > > > Debug > x64 > > > Release > Win32 > > > Release > x64 > > > > Win32Proj > lapack_lite > > > > DynamicLibrary > true > true > false > > > DynamicLibrary > true > true > false > > > DynamicLibrary > false > true > > > DynamicLibrary > false > true > > > > > > > > > > > > > > > > > > > true > > > true > > > true > > > true > > > > WIN32;_DEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;LAPACK_LITE_EXPORTS;%(PreprocessorDefinitions) > MultiThreadedDebugDLL > Level3 > ProgramDatabase > Disabled > ..\..\libndarray\src > false > Async > Default > true > > > MachineX86 > true > Windows > kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) > $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) > false > > > > > true > > > > > WIN32;_WIN64;_DEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;LAPACK_LITE_EXPORTS;%(PreprocessorDefinitions) > MultiThreadedDebugDLL > Level3 > ProgramDatabase > Disabled > ..\..\libndarray\src > false > Async > Default > true > > > true > Windows > kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) > $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) > false > > > > > true > > > > > WIN32;NDEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;LAPACK_LITE_EXPORTS;%(PreprocessorDefinitions) > MultiThreadedDLL > Level3 > ProgramDatabase > ..\..\libndarray\src;%(AdditionalIncludeDirectories) > true > > > MachineX86 > true > Windows > true > true > kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) > $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) > > > > > WIN32;_WIN64;NDEBUG;_WINDOWS;_USRDLL;NO_CPYTHON;LAPACK_LITE_EXPORTS;%(PreprocessorDefinitions) > MultiThreadedDLL > Level3 > ProgramDatabase > ..\..\libndarray\src;%(AdditionalIncludeDirectories) > true > > > true > Windows > true > true > kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;ndarray.lib;%(AdditionalDependencies) > $(OutDir);..\..\..\numpy-refactor\numpy\NumpyDotNet\bin;%(AdditionalLibraryDirectories) > > > > > > > > > > > > > > > > > > > > > > > > false > false > false > false > > > false > false > false > false > > > false > false > false > false > > > false > false > false > false > > > > false > false > false > false > > > false > false > false > false > > > > > {9d8fa516-085c-40b2-93ca-f3a419b2fced} > > > > > Only in pure-numpy/src/numpy/linalg: lapack_lite.vcxproj.vspscc Only in numpy-refactor/numpy/linalg: tests Only in pure-numpy/src/numpy/linalg: x64 diff -r pure-numpy/src/numpy/ma/__init__.py numpy-refactor/numpy/ma/__init__.py 52a53,56 > > from numpy.testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).bench Only in numpy-refactor/numpy/ma: tests diff -r pure-numpy/src/numpy/matrixlib/__init__.py numpy-refactor/numpy/matrixlib/__init__.py 4a5,8 > > from numpy.testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).bench Only in numpy-refactor/numpy/matrixlib: tests diff -r pure-numpy/src/numpy/numarray/__init__.py numpy-refactor/numpy/numarray/__init__.py 26a27,30 > > from numpy.testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).bench Only in pure-numpy/src/numpy/NumpyDotNet: bin diff -r pure-numpy/src/numpy/NumpyDotNet/broadcast.cs numpy-refactor/numpy/NumpyDotNet/broadcast.cs 9c9 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/calculation.cs numpy-refactor/numpy/NumpyDotNet/calculation.cs 9c9 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/CompiledBase.cs numpy-refactor/numpy/NumpyDotNet/CompiledBase.cs 14,17c14,15 < [assembly: PythonModule("_compiled_base", typeof(Cascade.VTFA.Python.Numpy.CompiledBase))] < < namespace Cascade.VTFA.Python.Numpy < { --- > [assembly: PythonModule("_compiled_base", typeof(NumpyDotNet.CompiledBase))] > namespace NumpyDotNet { 21,22c19 < public static class CompiledBase < { --- > public static class CompiledBase { diff -r pure-numpy/src/numpy/NumpyDotNet/convert.cs numpy-refactor/numpy/NumpyDotNet/convert.cs 9c9 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/dtype.cs numpy-refactor/numpy/NumpyDotNet/dtype.cs 11a12 > using NumpyDotNet; 13,14c14 < namespace Cascade.VTFA.Python.Numpy < { --- > namespace NumpyDotNet { diff -r pure-numpy/src/numpy/NumpyDotNet/flagsobj.cs numpy-refactor/numpy/NumpyDotNet/flagsobj.cs 8c8 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/flatiter.cs numpy-refactor/numpy/NumpyDotNet/flatiter.cs 10c10 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/IArray.cs numpy-refactor/numpy/NumpyDotNet/IArray.cs 6c6 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/item_selection.cs numpy-refactor/numpy/NumpyDotNet/item_selection.cs 9c9 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/ModuleMethods.cs numpy-refactor/numpy/NumpyDotNet/ModuleMethods.cs 14c14 < namespace Cascade.VTFA.Python.Numpy { --- > namespace NumpyDotNet { diff -r pure-numpy/src/numpy/NumpyDotNet/ndarray.cs numpy-refactor/numpy/NumpyDotNet/ndarray.cs 1,2507c1,2502 < ???using System; < using System.Collections; < using System.Collections.Generic; < using System.Linq; < using System.Text; < using System.Runtime.InteropServices; < using System.Runtime.CompilerServices; < using System.Reflection; < using System.Numerics; < using IronPython.Modules; < using IronPython.Runtime; < using IronPython.Runtime.Operations; < using IronPython.Runtime.Types; < using IronPython.Runtime.Exceptions; < using Microsoft.Scripting; < < < < namespace Cascade.VTFA.Python.Numpy < { < /// < /// Implements the Numpy python 'ndarray' object and acts as an interface to < /// the core NpyArray data structure. Npy_INTERFACE(NpyArray *) points an < /// instance of this class. < /// < [PythonType] < // ReSharper disable once InconsistentNaming < public partial class ndarray : Wrapper, IBufferProvider, IArray < { < public const string __module__ = "numpy"; < < public static ndarray __new__(CodeContext cntx, PythonType cls, < object shape, object dtype = null, < object buffer = null, object offset = null, < object strides = null, object order = null) { < ndarray result = (ndarray)ObjectOps.__new__(cntx, cls); < result.Construct(cntx, shape, dtype, buffer, offset, strides, order); < return result; < } < < internal void Construct(CodeContext cntx, object shape, object dtype = null, < object buffer = null, object offset = null, < object strides = null, object order = null) < { < dtype type = null; < < core = IntPtr.Zero; < < long[] aShape = NpyUtil_ArgProcessing.IntArrConverter(shape); < < if (dtype != null) < { < type = NpyDescr.DescrConverter(cntx, dtype); < } < < if (buffer != null) < throw new NotImplementedException("Buffer support is not implemented."); < < long loffset = NpyUtil_ArgProcessing.IntConverter(offset); < long[] aStrides = NpyUtil_ArgProcessing.IntArrConverter(strides); < NpyDefs.NPY_ORDER eOrder = NpyUtil_ArgProcessing.OrderConverter(order); < < if (type == null) < type = NpyCoreApi.DescrFromType(NpyDefs.DefaultType); < < int itemsize = type.ElementSize; < if (itemsize == 0) { < throw new ArgumentException("data-type with unspecified variable length"); < } < < if (aStrides != null) { < if (aStrides.Length != aShape.Length) { < throw new ArgumentException("strides, if given, must be the same length as shape"); < } < < if (!NpyArray.CheckStrides(itemsize, aShape, aStrides)) { < throw new ArgumentException("strides is compatible with shape of requested array and size of buffer"); < } < } < < // Creates a new array object. By passing 'this' in the current instance < // becomes the wrapper object for the new array. < ndarray wrap = NpyCoreApi.NewFromDescr(type, aShape, aStrides, 0, < new NpyCoreApi.UseExistingWrapper { Wrapper = this }); < if (wrap != this) { < throw new InvalidOperationException("Internal error: returned array wrapper is different than current instance."); < } < // NOTE: CPython fills object arrays with Py_None here. We don't < // need to do this since None is null and the arrays are zero filled. < } < < protected override void Dispose(bool disposing) { < if (core != IntPtr.Zero) { < lock (this) { < if (reservedMemPressure > 0) < DecreaseMemoryPressure(reservedMemPressure); < base.Dispose(disposing); < } < } < } < < /// < /// Danger! This method is only intended to be used indirectly during construction < /// when the new instance is passed into the core as the 'interfaceData' field so < /// ArrayNewWrapper can pair up this instance with a core object. If this pointer < /// is changed after pairing, bad things can happen. < /// < /// Core object to be paired with this wrapper < internal void SetArray(IntPtr a) { < if (core == null) { < throw new InvalidOperationException("Attempt to change core array object for already-constructed wrapper."); < } < core = a; < } < < < #region Public interfaces (must match CPython) < < private static Func reprFunction; < private static Func strFunction; < < /// < /// Sets a function to be triggered for the repr() operator or null to default to the < /// built-in version. < /// < public static Func ReprFunction { < get { return reprFunction; } < internal set { reprFunction = (value != null) ? value : x => x.BuildStringRepr(true); } < } < < /// < /// Sets a function to be triggered on the str() operator or ToString() method. Null defaults to < /// the built-in version. < /// < public static Func StrFunction { < get { return strFunction; } < internal set { strFunction = (value != null) ? value : x => x.BuildStringRepr(false); } < } < < static ndarray() { < ReprFunction = null; < StrFunction = null; < } < < #region Python methods < < public virtual string __repr__(CodeContext cntx) { < return ReprFunction(this); < } < < public virtual string __str__(CodeContext cntx) { < return StrFunction(this); < } < < public virtual object __reduce__(CodeContext cntx, object notused=null) { < const int version = 1; < < // Result is a tuple of (callable object, arguments, object's state). < object[] ret = new object[3]; < ret[0] = NpyUtil_Python.GetModuleAttr(cntx, "numpy.core.multiarray", "_reconstruct"); < if (ret[0] == null) return null; < < ret[1] = PythonOps.MakeTuple(DynamicHelpers.GetPythonType(this), PythonOps.MakeTuple(0), "b"); < < // Fill in the object's state. This is a tuple with 5 argumentS: < // 1) an integer with the pickle version < // 2) a Tuple giving the shape < // 3) a dtype object with the correct byteorder set < // 4) a Bool stating if Fortran or not < // 5) a Python object representing the data (a string or list or something) < object[] state = new object[5]; < state[0] = version; < state[1] = this.shape; < state[2] = this.Dtype; < state[3] = this.IsFortran; < state[4] = Dtype.ChkFlags(NpyDefs.NPY_LIST_PICKLE) ? GetPickleList() : ToBytes(); < < ret[2] = new PythonTuple(state); < return new PythonTuple(ret); < } < < < /// < /// Generates a string containing the byte representation of the array. This is quite < /// inefficient as the string (being 16-bit unicode) is twice the size needed, but this < /// is what the pickler uses. Ugh. < /// < /// Desired output order, default is array's current order < /// String containing data bytes < private String ToBytes(NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_ANYORDER) { < if (order == NpyDefs.NPY_ORDER.NPY_ANYORDER) { < order = IsFortran ? NpyDefs.NPY_ORDER.NPY_FORTRANORDER : NpyDefs.NPY_ORDER.NPY_CORDER; < } < < long size = itemsize * Size; < if (size >= Int32.MaxValue) { < throw new NotImplementedException("Total array size exceeds 2GB limit imposed by .NET string size, unable to pickle array."); < } < < string result; < if (IsContiguous && order == NpyDefs.NPY_ORDER.NPY_CORDER || < IsFortran && order == NpyDefs.NPY_ORDER.NPY_FORTRANORDER) { < unsafe { < result = new string((sbyte*)UnsafeAddress, 0, (int)size); < } < } else { < // TODO: Implementation requires some thought to implement to try to avoid making multiple copies of < // the data. The issue is that we have to return a string. We can allocate a string of the appropriate < // size, but it is immutable. StringBuilder works, but we end up copying. Can do it in C, but end up < // copying in C, then copy into String. Ugh. < throw new NotImplementedException("Pickling of non-contiguous arrays or transposing arrays is not supported"); < } < return result; < } < < private object GetPickleList() { < List list = new List(); < for (flatiter iter = this.Flat; iter.MoveNext(); list.append(iter.Current)) ; < return list; < } < < public virtual object __setstate__(PythonTuple t) { < if (t.Count == 4) { < return __setstate__(0, (PythonTuple)t[0], (dtype)t[1], t[2], t[3]); < } else if (t.Count == 5) { < return __setstate__((int)t[0], (PythonTuple)t[1], (dtype)t[2], t[3], t[4]); < } else { < throw new NotImplementedException( < String.Format("Unhandled pickle format with {0} arguments.", t.Count)); < } < } < < < /// < /// Duplicates the array, performing a deepcopy of the array and all contained objects. < /// < /// Passed to the Python copy.deepcopy() routine < /// duplicated array < public object __deepcopy__(object visit) { < ndarray ret = this.Copy(); < if (ret.Dtype.IsObject) { < IntPtr optr = ret.UnsafeAddress; < flatiter it = this.Flat; < while (it.MoveNext()) { < deepcopy_call(it.CurrentPtr, optr, this, this.Dtype, visit); < optr = optr + this.Dtype.itemsize; < } < } < return ret; < } < < < /// < /// Recursive function to copy object element, even when the element is a record with < /// fields containing objects. < /// < /// Pointer to the start of the input element < /// Pointer to the destination element < /// Source array < /// Element type descriptor < /// Passed to Python copy.deepcopy() function < private void deepcopy_call(IntPtr iptr, IntPtr optr, ndarray arr, dtype type, object visit) { < if (type.IsObject) { < if (type.HasNames) { < // Check each field and recursively process any that contain object references. < PythonDictionary fields = Dtype.Fields; < foreach (KeyValuePair i in fields) { < string key = (string)i.Key; < PythonTuple value = (PythonTuple)i.Value; < if (value.Count == 3 && (string)value[2] == key) continue; < < dtype subtype = (dtype)value[0]; < int offset = (int)value[1]; < < deepcopy_call(iptr + offset, optr + offset, arr, subtype, visit); < } < } else { < object current = type.f.GetItem((long)iptr - (long)arr.UnsafeAddress, arr); < object copy = NpyUtil_Python.CallFunction(NpyUtil_Python.DefaultContext, "copy", "deepcopy", < current, visit); < IntPtr otemp = Marshal.ReadIntPtr(optr); < if (otemp != IntPtr.Zero) { < NpyCoreApi.FreeGCHandle( NpyCoreApi.GCHandleFromIntPtr(otemp) ); < } < Marshal.WriteIntPtr(optr, GCHandle.ToIntPtr(NpyCoreApi.AllocGCHandle(copy))); < } < } < } < < < public virtual object __setstate__(PythonTuple shape, dtype typecode, object fortran, object rawdata) { < return __setstate__(0, shape, typecode, fortran, rawdata); < } < < public virtual object __setstate__(int version, PythonTuple shape, dtype typecode, object fortran, object rawData) { < bool fortranFlag = NpyUtil_ArgProcessing.BoolConverter(fortran); < < if (version != 1 && version != 0) { < throw new ArgumentException( < String.Format("can't handle version {0} of numpy.ndarray pickle.", version)); < } < < IntPtr[] dimensions = NpyUtil_ArgProcessing.IntpArrConverter(shape); < int nd = dimensions.Length; < long size = dimensions.Aggregate(1L, (x, y) => x * (long)y); < < if (nd < 1) { < return null; < } < if (typecode.ElementSize == 0) { < throw new ArgumentException("Invalid data-type size"); < } < if (size < 0 || size > Int64.MaxValue / typecode.ElementSize) { < throw new InsufficientMemoryException(); < } < < if (typecode.ChkFlags(NpyDefs.NPY_LIST_PICKLE)) { < if (!(rawData is List)) { < throw new ArgumentTypeException("object pickle not returning list"); < } < } else { < if (!(rawData is string)) { < throw new ArgumentTypeException("pickle not returning string"); < } < if (((string)rawData).Length != typecode.itemsize * size) { < throw new ArgumentException("buffer size does not match array size"); < } < } < < // Set the state of this array using the passed in data. Everything in this array goes away. < // The .SetState method resizes/reallocated the data memory. < this.Dtype = typecode; < NpyCoreApi.SetState(this, dimensions, fortranFlag ? NpyDefs.NPY_ORDER.NPY_FORTRANORDER : NpyDefs.NPY_ORDER.NPY_CORDER, < rawData as string); < < if (rawData is List) { < flatiter iter = NpyCoreApi.IterNew(this); < foreach (object o in (List)rawData) { < if (!iter.MoveNext()) { < break; < } < iter.Current = o; < } < } < return null; < } < < < /// < /// Returns the length of dimension zero of the array < /// < /// Length of the first dimension < public virtual object __len__() { < if (ndim == 0) { < throw new ArgumentTypeException("len() of unsized object"); < } < return PythonOps.ToPython((IntPtr)Dims[0]); < } < < public object __abs__(CodeContext cntx) { < return UnaryOp(cntx, this, NpyDefs.NpyArray_Ops.npy_op_absolute); < } < < public ndarray __array__(CodeContext cntx, object descr = null) { < dtype newtype = null; < ndarray result; < < if (descr != null) { < newtype = NpyDescr.DescrConverter(cntx, descr); < } < if (GetType() != typeof(ndarray)) { < result = NpyCoreApi.FromArray(this, Dtype, NpyDefs.NPY_ENSUREARRAY); < } else { < result = this; < } < if (newtype == null || newtype == result.Dtype) { < return result; < } else { < return NpyCoreApi.CastToType(result, newtype, false); < } < } < < public ndarray __array_prepare__(ndarray a, params object[] args) { < return NpyCoreApi.ViewLike(a, this); < } < < public ndarray __array_wrap__(ndarray a) { < if (GetType() == a.GetType()) { < return a; < } else { < return NpyCoreApi.ViewLike(a, this); < } < } < < public object __divmod__(CodeContext cntx, Object b) { < return PythonOps.MakeTuple( < BinaryOp(cntx, this, b, NpyDefs.NpyArray_Ops.npy_op_floor_divide), < BinaryOp(cntx, this, b, NpyDefs.NpyArray_Ops.npy_op_remainder)); < } < < public object __rdivmod__(CodeContext cntx, Object a) { < return PythonOps.MakeTuple( < BinaryOp(cntx, a, this, NpyDefs.NpyArray_Ops.npy_op_floor_divide), < BinaryOp(cntx, a, this, NpyDefs.NpyArray_Ops.npy_op_remainder)); < } < < public object __lshift__(CodeContext cntx, Object b) { < return BinaryOp(cntx, this, b, NpyDefs.NpyArray_Ops.npy_op_left_shift); < } < < public object __rlshift__(CodeContext cntx, Object a) { < return BinaryOp(cntx, a, this, NpyDefs.NpyArray_Ops.npy_op_left_shift); < } < < public object __rshift__(CodeContext cntx, Object b) { < return BinaryOp(cntx, this, b, NpyDefs.NpyArray_Ops.npy_op_right_shift); < } < < public object __rrshift__(CodeContext cntx, Object a) { < return BinaryOp(cntx, a, this, NpyDefs.NpyArray_Ops.npy_op_right_shift); < } < < public object __sqrt__(CodeContext cntx) { < return UnaryOp(cntx, this, NpyDefs.NpyArray_Ops.npy_op_sqrt); < } < < public object __mod__(CodeContext cntx, Object b) { < return BinaryOp(cntx, this, b, "remainder"); < } < < public object __rmod__(CodeContext cntx, Object a) { < return BinaryOp(cntx, a, this, "remainder"); < } < < #endregion < < #region Operators < < internal static object BinaryOp(CodeContext cntx, object a, object b, ufunc f, ndarray ret = null) { < if (cntx == null) { < cntx = NpyUtil_Python.DefaultContext; < } < try { < object result; < if (ret == null) { < result = f.Call(cntx, null, a, b); < } else { < result = f.Call(cntx, null, a, b, ret); < } < if (result.GetType() == typeof(ndarray)) { < return ArrayReturn((ndarray)result); < } else { < return result; < } < } catch (NotImplementedException) { < return cntx.LanguageContext.BuiltinModuleDict["NotImplemented"]; < } < } < < internal static object BinaryOp(CodeContext cntx, object a, object b, < NpyDefs.NpyArray_Ops op, ndarray ret = null) { < ufunc f = NpyCoreApi.GetNumericOp(op); < return BinaryOp(cntx, a, b, f, ret); < } < < internal static object BinaryOp(CodeContext cntx, object a, object b, < string fname, ndarray ret = null) { < ufunc f = ufunc.GetFunction(fname); < return BinaryOp(cntx, a, b, f, ret); < } < < < internal static object UnaryOp(CodeContext cntx, object a, NpyDefs.NpyArray_Ops op, < ndarray ret = null) { < if (cntx == null) { < cntx = NpyUtil_Python.DefaultContext; < } < ufunc f = NpyCoreApi.GetNumericOp(op); < object result; < if (ret == null) { < result = f.Call(cntx, null, a); < } else { < result = f.Call(cntx, null, a, ret); < } < if (result is ndarray) { < return ArrayReturn((ndarray)result); < } else { < return result; < } < } < < public static object operator +(ndarray a) { < return a; < } < < public static object operator +(ndarray a, Object b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_add); < } < < public static object operator +(object a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_add); < } < < public static object operator +(ndarray a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_add); < } < < [SpecialName] < public object InPlaceAdd(object b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_add, this); < } < < [SpecialName] < public object InPlaceAdd(ndarray b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_add, this); < } < < public static object operator -(ndarray a, Object b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_subtract); < } < < public static object operator -(object a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_subtract); < } < < public static object operator -(ndarray a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_subtract); < } < < [SpecialName] < public object InPlaceSubtract(object b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_subtract, this); < } < < [SpecialName] < public object InPlaceSubtract(ndarray b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_subtract, this); < } < < public static object operator -(ndarray a) { < return UnaryOp(null, a, NpyDefs.NpyArray_Ops.npy_op_negative); < } < < public static object operator *(ndarray a, Object b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_multiply); < } < < public static object operator *(object a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_multiply); < } < < public static object operator *(ndarray a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_multiply); < } < < [SpecialName] < public object InPlaceMultiply(object b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_multiply, this); < } < < [SpecialName] < public object InPlaceMultiply(ndarray b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_multiply, this); < } < < public static object operator /(ndarray a, Object b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_divide); < } < < public static object operator /(object a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_divide); < } < < public static object operator /(ndarray a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_divide); < } < < [SpecialName] < public object InPlaceDivide(object b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_divide, this); < } < < [SpecialName] < public object InPlaceDivide(ndarray b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_divide, this); < } < < [SpecialName] < public object InPlaceTrueDivide(object b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_true_divide, this); < } < < [SpecialName] < public object InPlaceTrueDivide(ndarray b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_true_divide, this); < } < < [SpecialName] < public object InPlaceFloorDivide(object b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_floor_divide, this); < } < < [SpecialName] < public object InPlaceFloorDivide(ndarray b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_floor_divide, this); < } < < public object __pow__(object a) { < // TODO: Add optimizations for scalar powers < return BinaryOp(null, this, a, NpyDefs.NpyArray_Ops.npy_op_power); < } < < < < public static object operator &(ndarray a, Object b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and); < } < < public static object operator &(object a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and); < } < < public static ndarray operator &(ndarray a, ndarray b) { < return (ndarray)BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and); < } < < [SpecialName] < public object InPlaceBitwiseAnd(object b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and, this); < } < < [SpecialName] < public object InPlaceBitwiseAnd(ndarray b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and, this); < } < < public static object operator |(ndarray a, Object b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or); < } < < public static object operator |(object a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or); < } < < public static ndarray operator |(ndarray a, ndarray b) { < return (ndarray)BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or); < } < < [SpecialName] < public object InPlaceBitwiseOr(object b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or, this); < } < < [SpecialName] < public object InPlaceBitwiseOr(ndarray b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or, this); < } < < public static object operator ^(ndarray a, Object b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor); < } < < public static object operator ^(object a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor); < } < < public static object operator ^(ndarray a, ndarray b) { < return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor); < } < < public static ndarray operator <<(ndarray a, int shift) { < return (ndarray)BinaryOp(null, a, shift, NpyDefs.NpyArray_Ops.npy_op_left_shift); < } < < public static ndarray operator >>(ndarray a, int shift) { < return (ndarray)BinaryOp(null, a, shift, NpyDefs.NpyArray_Ops.npy_op_right_shift); < } < < public static object Power(Object a, Object b) { < return BinaryOp(null, NpyArray.FromAny(a), b, NpyDefs.NpyArray_Ops.npy_op_power); < } < < [SpecialName] < public object InPlaceExclusiveOr(object b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor, this); < } < < [SpecialName] < public object InPlaceExclusiveOr(ndarray b) { < return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor, this); < } < < public static object operator ~(ndarray a) { < return UnaryOp(null, a, NpyDefs.NpyArray_Ops.npy_op_invert); < } < < public static implicit operator String(ndarray a) { < return StrFunction(a); < } < < // NOTE: For comparison operators we use the Python names < // since these operators usually return boolean arrays and < // .NET seems to expect them to return bool < < public object __eq__(CodeContext cntx, object o) { < if (o == null) { < return false; < } < NpyDefs.NPY_TYPES type = Dtype.TypeNum; < ndarray arrayother = o as ndarray; < if (arrayother == null) { < // Try to convert to an array. Return not equal on failure < try { < if (type != NpyDefs.NPY_TYPES.NPY_OBJECT) { < type = NpyDefs.NPY_TYPES.NPY_NOTYPE; < } < arrayother = NpyArray.FromAny(o, NpyCoreApi.DescrFromType(type), flags: NpyDefs.NPY_BEHAVED | NpyDefs.NPY_ENSUREARRAY); < if (arrayother == null) { < return false; < } < } catch { < return false; < } < } < < // The next two blocks are ugly. First try equal with arguments in the expected < // order this == arrayother. If that fails with a not implemented issue or type < // error, then we retry with the arguments reversed. < object result = null; < try { < result = BinaryOp(cntx, this, arrayother, NpyDefs.NpyArray_Ops.npy_op_equal); < } catch (NotImplementedException) { < result = null; < } catch (ArgumentTypeException) { < result = null; < } < if (result == null || result == Builtin.NotImplemented) { < try { < result = BinaryOp(cntx, arrayother, this, NpyDefs.NpyArray_Ops.npy_op_equal); < } catch (NotImplementedException) { < result = Builtin.NotImplemented; < } < } < < if (result == Builtin.NotImplemented) { < if (type == NpyDefs.NPY_TYPES.NPY_VOID) { < if (Dtype != arrayother.Dtype) { < return false; < } < if (Dtype.HasNames) { < object res = null; < foreach (string name in Dtype.Names) { < ndarray a1 = NpyArray.EnsureAnyArray(this[name]); < ndarray a2 = NpyArray.EnsureAnyArray(arrayother[name]); < object eq = a1.__eq__(cntx, a2); < if (res == null) { < res = eq; < } else { < res = BinaryOp(cntx, res, eq, NpyDefs.NpyArray_Ops.npy_op_logical_and); < } < } < if (res == null) { < throw new ArgumentException("No fields found"); < } < return res; < } < result = NpyCoreApi.CompareStringArrays(this, arrayother, NpyDefs.NPY_COMPARE_OP.NPY_EQ); < } else { < result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_EQ); < } < } < return result; < } < < public object __req__(CodeContext cntx, object o) { < return __eq__(cntx, o); < } < < public object __ne__(CodeContext cntx, object o) { < if (o == null) { < return true; < } < NpyDefs.NPY_TYPES type = Dtype.TypeNum; < ndarray arrayother = o as ndarray; < if (arrayother == null) { < // Try to convert to an array. Return not equal on failure < try { < if (type == NpyDefs.NPY_TYPES.NPY_OBJECT) { < type = NpyDefs.NPY_TYPES.NPY_NOTYPE; < } < arrayother = NpyArray.FromAny(o, NpyCoreApi.DescrFromType(type), flags: NpyDefs.NPY_BEHAVED | NpyDefs.NPY_ENSUREARRAY); < if (arrayother == null) { < return true; < } < } catch { < return true; < } < } < < object result = BinaryOp(cntx, this, arrayother, NpyDefs.NpyArray_Ops.npy_op_not_equal); < if (result == Builtin.NotImplemented) { < if (type == NpyDefs.NPY_TYPES.NPY_VOID) { < if (Dtype != arrayother.Dtype) { < return false; < } < if (Dtype.HasNames) { < object res = null; < foreach (string name in Dtype.Names) { < ndarray a1 = NpyArray.EnsureAnyArray(this[name]); < ndarray a2 = NpyArray.EnsureAnyArray(arrayother[name]); < object eq = a1.__ne__(cntx, a2); < if (res == null) { < res = eq; < } else { < res = BinaryOp(cntx, res, eq, NpyDefs.NpyArray_Ops.npy_op_logical_or); < } < } < if (res == null) { < throw new ArgumentException("No fields found"); < } < return res; < } < result = NpyCoreApi.CompareStringArrays(this, arrayother, NpyDefs.NPY_COMPARE_OP.NPY_NE); < } else { < result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_NE); < } < } < return result; < } < < public object __rne__(CodeContext cntx, object o) { < return __ne__(cntx, o); < } < < public object __lt__(CodeContext cntx, object o) { < object result = BinaryOp(cntx, this, o, NpyDefs.NpyArray_Ops.npy_op_less); < if (result == Builtin.NotImplemented) { < result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_LT); < } < return result; < } < < public object __rlt__(CodeContext cntx, object o) { < return __ge__(cntx, o); < } < < public object __le__(CodeContext cntx, object o) { < object result = BinaryOp(cntx, this, o, NpyDefs.NpyArray_Ops.npy_op_less_equal); < if (result == Builtin.NotImplemented) { < result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_LE); < } < return result; < } < < public object __rle__(CodeContext cntx, object o) { < return __gt__(cntx, o); < } < < public object __gt__(CodeContext cntx, object o) { < object result = BinaryOp(cntx, this, o, NpyDefs.NpyArray_Ops.npy_op_greater); < if (result == Builtin.NotImplemented) { < result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_GT); < } < return result; < } < < public object __rgt__(CodeContext cntx, object o) { < return __le__(cntx, o); < } < < public object __ge__(CodeContext cntx, object o) { < object result = BinaryOp(cntx, this, o, NpyDefs.NpyArray_Ops.npy_op_greater_equal); < if (result == Builtin.NotImplemented) { < result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_GE); < } < return result; < } < < public object __rge__(CodeContext cntx, object o) { < return __lt__(cntx, o); < } < < private object strings_compare(object o, NpyDefs.NPY_COMPARE_OP op) { < if (NpyDefs.IsString(Dtype.TypeNum)) { < ndarray self = this; < ndarray array_other = NpyArray.FromAny(o, flags: NpyDefs.NPY_BEHAVED | NpyDefs.NPY_ENSUREARRAY); < if (self.Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_UNICODE && < array_other.Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_STRING) { < dtype dt = new dtype(self.Dtype); < dt.ElementSize = array_other.Dtype.ElementSize*4; < array_other = NpyCoreApi.FromArray(array_other, dt, 0); < } else if (self.Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_STRING && < array_other.Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_UNICODE) { < dtype dt = new dtype(array_other.Dtype); < dt.ElementSize = self.Dtype.ElementSize * 4; < self = NpyCoreApi.FromArray(self, dt, 0); < } < return ArrayReturn(NpyCoreApi.CompareStringArrays(self, array_other, op)); < } < return Builtin.NotImplemented; < } < < public object __int__(CodeContext cntx) { < if (Size != 1) { < throw new ArgumentException("only length 1 arrays can be converted to scalars"); < } < return NpyUtil_Python.CallBuiltin(cntx, "int", GetItem(0)); < } < < public object __long__(CodeContext cntx) { < if (Size != 1) { < throw new ArgumentException("only length 1 arrays can be converted to scalars"); < } < return NpyUtil_Python.CallBuiltin(cntx, "long", GetItem(0)); < } < < public object __float__(CodeContext cntx) { < if (Size != 1) { < throw new ArgumentException("only length 1 arrays can be converted to scalars"); < } < return NpyUtil_Python.CallBuiltin(cntx, "float", GetItem(0)); < } < < public object __floordiv__(CodeContext cntx, object o) { < return BinaryOp(null, this, o, NpyDefs.NpyArray_Ops.npy_op_floor_divide); < } < < public object __truediv__(CodeContext cntx, object o) { < return BinaryOp(null, this, o, NpyDefs.NpyArray_Ops.npy_op_true_divide); < } < < public object __complex__(CodeContext cntx) { < if (Size != 1) { < throw new ArgumentException("only length 1 arrays can be converted to scalars"); < } < return NpyUtil_Python.CallBuiltin(cntx, "complex", GetItem(0)); < } < < public bool __nonzero__() { < return (bool)this.any(); < } < < public static explicit operator bool(ndarray arr) { < int val = NpyCoreApi.ArrayBool(arr); < if (val < 0) { < NpyCoreApi.CheckError(); < return false; < } else { < return val != 0; < } < } < < public static explicit operator int(ndarray arr) { < object val = arr.__int__(null); < if (val is int) { < return (int)val; < } else { < throw new OverflowException(); < } < } < < public static explicit operator BigInteger(ndarray arr) { < return (BigInteger)arr.__long__(null); < } < < public static explicit operator double(ndarray arr) { < return (double)arr.__float__(null); < } < < public static explicit operator Complex(ndarray arr) { < return (Complex)arr.__complex__(null); < } < < #endregion < < #region indexing < < public object this[int index] { < get { < return ArrayItem((long)index); < } < } < < public object this[long index] { < get { < return ArrayItem(index); < } < } < < public object this[IntPtr index] { < get { < return ArrayItem(index.ToInt64()); < } < } < < public object this[BigInteger index] { < get { < long lIndex = (long)index; < return ArrayItem(lIndex); < } < } < < public Object this[params object[] args] { < get { < if (args == null) { < args = new object[] { null }; < } else { < if (args.Length == 1 && args[0] is PythonTuple) { < args = ((IEnumerable)args[0]).ToArray(); < } < < if (args.Length == 1 && args[0] is string) { < string field = (string)args[0]; < return ArrayReturn(NpyCoreApi.GetField(this, field)); < } < } < using (NpyIndexes indexes = new NpyIndexes()) < { < NpyUtil_IndexProcessing.IndexConverter(args, indexes); < if (indexes.IsSingleItem(ndim)) < { < // Optimization for single item index. < long offset = 0; < Int64[] dims = Dims; < Int64[] s = Strides; < for (int i = 0; i < ndim; i++) < { < long d = dims[i]; < long val = indexes.GetIntPtr(i).ToInt64(); < if (val < 0) < { < val += d; < } < if (val < 0 || val >= d) < { < throw new IndexOutOfRangeException(); < } < offset += val * s[i]; < } < return Dtype.ToScalar(this, offset); < } else if (indexes.IsMultiField) { < // Special case for multiple fields, transfer control back to Python. < // See PyArray_Subscript in mapping.c of the CPython API for similar. < return NpyUtil_Python.CallFunction(NpyUtil_Python.DefaultContext, "numpy.core._internal", < "_index_fields", this, args); < } < < < // General subscript case. < NpyCoreApi.Incref(Array); < ndarray result = NpyCoreApi.DecrefToInterface( < NpyCoreApi.ArraySubscript(this, indexes)); < NpyCoreApi.Decref(Array); < < if (result.ndim == 0) { < // We only want to return a scalar if there are not elipses < bool noelipses = true; < int n = indexes.NumIndexes; < for (int i = 0; i < n; i++) { < NpyIndexes.NpyIndexTypes t = indexes.IndexType(i); < if (t == NpyIndexes.NpyIndexTypes.ELLIPSIS || < t == NpyIndexes.NpyIndexTypes.STRING || < t == NpyIndexes.NpyIndexTypes.BOOL) { < noelipses = false; < break; < } < } < if (noelipses) { < return result.Dtype.ToScalar(this); < } < } < return result; < } < } < set { < if (!ChkFlags(NpyDefs.NPY_WRITEABLE)) { < throw new RuntimeException("array is not writeable."); < } < < if (args == null) { < args = new object[] { null }; < } else { < if (args.Length == 1 && args[0] is PythonTuple) { < PythonTuple pt = (PythonTuple)args[0]; < args = pt.ToArray(); < } < < if (args.Length == 1 && args[0] is string) { < string field = (string)args[0]; < if (!ChkFlags(NpyDefs.NPY_WRITEABLE)) { < throw new RuntimeException("array is not writeable."); < } < IntPtr descr; < int offset = NpyCoreApi.GetFieldOffset(Dtype, field, out descr); < if (offset < 0) { < throw new ArgumentException(String.Format("field name '{0}' not found.", field)); < } < NpyArray.SetField(this, descr, offset, value); < return; < } < } < < < using (NpyIndexes indexes = new NpyIndexes()) < { < NpyUtil_IndexProcessing.IndexConverter(args, indexes); < < // Special case for boolean on 0-d arrays. < if (ndim == 0 && indexes.NumIndexes == 1 && indexes.IndexType(0) == NpyIndexes.NpyIndexTypes.BOOL) < { < if (indexes.GetBool(0)) < { < SetItem(value, 0); < } < return; < } < < // Special case for single assignment. < long single_offset = indexes.SingleAssignOffset(this); < if (single_offset >= 0) < { < // This is a single item assignment. Use SetItem. < SetItem(value, single_offset); < return; < } < < if (indexes.IsSimple) < { < ndarray view = null; < try { < if (GetType() == typeof(ndarray)) { < view = NpyCoreApi.IndexSimple(this, indexes); < } else { < // Call through python to let the subtype returns the correct view < // TODO: Do we really need this? Why only for set with simple indexing? < CodeContext cntx = PythonOps.GetPythonTypeContext(DynamicHelpers.GetPythonType(this)); < object item = PythonOps.GetIndex(cntx, this, new PythonTuple(args)); < view = (item as ndarray); < if (view == null) { < throw new RuntimeException("Getitem not returning array"); < } < } < < NpyArray.CopyObject(view, value); < } finally { < if (view != null) { < view.Dispose(); < } < } < } < else < { < ndarray array_value = NpyArray.FromAny(value, Dtype, 0, 0, NpyDefs.NPY_FORCECAST, null); < try { < NpyCoreApi.Incref(array_value.Array); < if (NpyCoreApi.IndexFancyAssign(this, indexes, array_value) < 0) { < NpyCoreApi.CheckError(); < } < } finally { < NpyCoreApi.Decref(array_value.Array); < } < } < } < } < } < < #endregion < < #region properties < < /// < /// Number of dimensions in the array < /// < public int ndim { < get { return Marshal.ReadInt32(core, NpyCoreApi.ArrayOffsets.off_nd); } < } < < /// < /// Returns the size of each dimension as a tuple. < /// < public object shape { < get { return NpyUtil_Python.ToPythonTuple(this.Dims); } < set { < IntPtr[] shape = NpyUtil_ArgProcessing.IntpArrConverter(value); < NpyCoreApi.SetShape(this, shape); < } < } < < < /// < /// Total number of elements in the array. < /// < public object size { < get { return NpyCoreApi.ArraySize(this).ToPython(); } < } < < public PythonBuffer data { < get { < throw new NotImplementedException(); < } < } < < /// < /// Returns the reference count of the core array object. Used for debugging only. < /// < public int __coreRefCount__ { get { return Marshal.ReadInt32(Array, NpyCoreApi.Offset_RefCount); } } < < < /// < /// The type descriptor object for this array < /// < public dtype Dtype { < get { < if (core == IntPtr.Zero) return null; < IntPtr descr = Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_descr); < return NpyCoreApi.ToInterface(descr); < } < set { < NpyCoreApi.ArraySetDescr(this, value); < } < } < < < /// < /// The type descriptor object for this array < /// < public object dtype { < get { < return this.Dtype; < } < set { < dtype descr = value as dtype; < if (descr == null) { < descr = NpyDescr.DescrConverter(NpyUtil_Python.DefaultContext, value); < } < NpyCoreApi.ArraySetDescr(this, descr); < } < } < < /// < /// Flags for this array < /// < public flagsobj flags { < get { < return new flagsobj(this); < } < } < < /// < /// Returns an array of the stride of each dimension. < /// < public Int64[] Strides { < get { return NpyCoreApi.GetArrayDimsOrStrides(this, false); } < } < < public PythonTuple strides { < get { return NpyUtil_Python.ToPythonTuple(Strides); } < } < < public object real { < get { < return NpyCoreApi.GetReal(this); < } < set { < ndarray val = NpyArray.FromAny(value, null, 0, 0, 0, null); < NpyCoreApi.MoveInto(NpyCoreApi.GetReal(this), val); < } < } < < public object imag { < get { < if (IsComplex) { < return NpyCoreApi.GetImag(this); < } else { < // TODO: np.zeros_like when we have it. < ndarray result = Copy(); < result.flat = 0; < return result; < } < } < set { < if (IsComplex) { < ndarray val = NpyArray.FromAny(value, null, 0, 0, 0, null); < NpyCoreApi.MoveInto(NpyCoreApi.GetImag(this), val); < } else { < throw new ArgumentTypeException("array does not have an imaginary part to set."); < } < } < } < < public object flat { < get { < return NpyCoreApi.IterNew(this); < } < set { < // Assing like a.flat[:] = value < flatiter it = NpyCoreApi.IterNew(this); < it[new Slice(null)] = value; < } < } < < public object @base { < get { < // TODO: Handle non-array bases < return BaseArray; < } < } < < public int itemsize { < get { < return Dtype.ElementSize; < } < } < < public object nbytes { < get { < return NpyUtil_Python.ToPython(itemsize*Size); < } < } < < public ndarray T { < get { < return Transpose(); < } < } < < public object ctypes { < get { < return NpyUtil_Python.CallFunction(null, "numpy.core._internal", < "_ctypes", this, UnsafeAddress.ToPython()); < } < } < < #endregion < < #region methods < < public int dump(CodeContext cntx, object file) < { < if (file is string) { < file = NpyUtil_Python.CallBuiltin(cntx, "open", file, "wb"); < } < NpyUtil_Python.CallFunction(cntx, "cPickle", "dump", this, file, 2); < return 0; < } < < public object dumps(CodeContext cntx) { < return NpyUtil_Python.CallFunction(cntx, "cPickle", "dumps", this, 2); < } < < public object all(object axis = null, ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < return ArrayReturn(All(iAxis, @out)); < } < < public object any(object axis = null, ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < return ArrayReturn(Any(iAxis, @out)); < } < < public object argmax(object axis = null, ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < return ArrayReturn(ArgMax(iAxis, @out)); < } < < public object argmin(object axis = null, ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < return ArrayReturn(ArgMin(iAxis, @out)); < } < < public object argsort(object axis = null, string kind = null, object order = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis, -1); < NpyDefs.NPY_SORTKIND sortkind = NpyUtil_ArgProcessing.SortkindConverter(kind); < < if (order != null) { < throw new NotImplementedException("Sort field order not yet implemented."); < } < < return ArrayReturn(ArgSort(iAxis, sortkind)); < } < < public object astype(CodeContext cntx, object dtype = null) { < dtype d = NpyDescr.DescrConverter(cntx, dtype); < if (d == this.Dtype) { < return this; < } < if (this.Dtype.HasNames) { < // CastToType doesn't work properly for < // record arrays, so we use FromArray. < int flags = NpyDefs.NPY_FORCECAST; < if (IsFortran) { < flags |= NpyDefs.NPY_FORTRAN; < } < return NpyCoreApi.FromArray(this, d, flags); < } < return NpyCoreApi.CastToType(this, d, this.IsFortran); < } < < public ndarray byteswap(bool inplace = false) { < return NpyCoreApi.Byteswap(this, inplace); < } < < private static string[] chooseArgNames = { "out", "mode" }; < < public object choose([ParamDictionary] IDictionary kwargs, < params object[] args){ < IEnumerable choices; < if (args == null) { < choices = new object[0]; < } < else if (args.Length == 1 && args[0] is IEnumerable) { < choices = (IEnumerable)args[0]; < } else { < choices = args; < } < object[] kargs = NpyUtil_ArgProcessing.BuildArgsArray(new object[0], chooseArgNames, kwargs); < ndarray aout = kargs[0] as ndarray; < NpyDefs.NPY_CLIPMODE clipMode = NpyUtil_ArgProcessing.ClipmodeConverter(kargs[1]); < return ArrayReturn(Choose(choices, aout, clipMode)); < } < < public object clip(object min = null, object max = null, ndarray @out = null) { < return Clip(min, max, @out); < } < < public ndarray compress(object condition, object axis = null, ndarray @out = null) { < ndarray aCondition = NpyArray.FromAny(condition, null, 0, 0, 0, null); < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < < if (aCondition.ndim != 1) { < throw new ArgumentException("condition must be 1-d array"); < } < < ndarray indexes = aCondition.NonZero()[0]; < return TakeFrom(indexes, iAxis, @out, NpyDefs.NPY_CLIPMODE.NPY_RAISE); < } < < public ndarray conj(ndarray @out = null) { < return conjugate(@out); < } < < public ndarray conjugate(ndarray @out = null) { < return Conjugate(@out); < } < < public object copy(object order = null) { < return ArrayReturn(Copy(order)); < } < < public ndarray Copy(object order = null) { < NpyDefs.NPY_ORDER eOrder = NpyUtil_ArgProcessing.OrderConverter(order); < return NpyCoreApi.NewCopy(this, eOrder); < } < < public object cumprod(CodeContext cntx, object axis = null, object dtype = null, < ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < dtype rtype = null; < if (dtype != null) { < rtype = NpyDescr.DescrConverter(cntx, dtype); < } < return CumProd(iAxis, rtype, @out); < } < < public object cumsum(CodeContext cntx, object axis = null, object dtype = null, < ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < dtype rtype = null; < if (dtype != null) { < rtype = NpyDescr.DescrConverter(cntx, dtype); < } < return CumSum(iAxis, rtype, @out); < } < < < public ndarray diagonal(int offset = 0, int axis1 = 0, int axis2 = 1) { < return Diagonal(offset, axis1, axis2); < } < < public object dot(object other) { < return ModuleMethods.dot(this, other); < } < < public void fill(object scalar) { < FillWithScalar(scalar); < } < < public ndarray flatten(object order = null) { < NpyDefs.NPY_ORDER eOrder = < NpyUtil_ArgProcessing.OrderConverter(order); < return Flatten(eOrder); < } < < public ndarray getfield(CodeContext cntx, object dtype, int offset = 0) { < dtype dt = NpyDescr.DescrConverter(cntx, dtype); < return NpyCoreApi.GetField(this, dt, offset); < } < < public object item(params object[] args) { < if (args != null && args.Length == 1 && args[0] is PythonTuple) { < PythonTuple t = (PythonTuple)args[0]; < args = t.ToArray(); < } < if (args == null || args.Length == 0) { < if (ndim == 0 || Size == 1) { < return GetItem(0); < } else { < throw new ArgumentException("can only convert an array of size 1 to a Python scalar"); < } < } else { < using (NpyIndexes indexes = new NpyIndexes()) { < NpyUtil_IndexProcessing.IndexConverter(args, indexes); < if (args.Length == 1) { < if (indexes.IndexType(0) != NpyIndexes.NpyIndexTypes.INTP) { < throw new ArgumentException("invalid integer"); < } < // Do flat indexing < return Flat.Get(indexes.GetIntPtr(0)); < } else { < if (indexes.IsSingleItem(ndim)) { < long offset = indexes.SingleAssignOffset(this); < return GetItem(offset); < } else { < throw new ArgumentException("Incorrect number of indices for the array"); < } < } < } < } < } < < public void itemset(params object[] args) { < // Convert args to value and args < if (args == null || args.Length == 0) { < throw new ArgumentException("itemset must have at least one argument"); < } < object value = args.Last(); < args = args.Take(args.Length - 1).ToArray(); < < if (args.Length == 1 && args[0] is PythonTuple) { < PythonTuple t = (PythonTuple)args[0]; < args = t.ToArray(); < } < if (args.Length == 0) { < if (ndim == 0 || Size == 1) { < SetItem(value, 0); < } else { < throw new ArgumentException("can only convert an array of size 1 to a Python scalar"); < } < } else { < using (NpyIndexes indexes = new NpyIndexes()) { < NpyUtil_IndexProcessing.IndexConverter(args, indexes); < if (args.Length == 1) { < if (indexes.IndexType(0) != NpyIndexes.NpyIndexTypes.INTP) { < throw new ArgumentException("invalid integer"); < } < // Do flat indexing < Flat.SingleAssign(indexes.GetIntPtr(0), value); < } else { < if (indexes.IsSingleItem(ndim)) { < long offset = indexes.SingleAssignOffset(this); < SetItem(value, offset); < } else { < throw new ArgumentException("Incorrect number of indices for the array"); < } < } < } < } < } < < public object max(object axis = null, ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < return ArrayReturn(Max(iAxis, @out)); < } < < public object mean(CodeContext cntx, object axis = null, object dtype = null, < ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < dtype rtype = null; < if (dtype != null) { < rtype = NpyDescr.DescrConverter(cntx, dtype); < } < return Mean(iAxis, GetTypeDouble(this.Dtype, rtype), @out); < } < < public object min(object axis = null, ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < return ArrayReturn(Min(iAxis, @out)); < } < < public ndarray newbyteorder(string endian = null) { < dtype newtype = NpyCoreApi.DescrNewByteorder(Dtype, NpyUtil_ArgProcessing.ByteorderConverter(endian)); < return NpyCoreApi.View(this, newtype, null); < } < < public PythonTuple nonzero() { < return new PythonTuple(NonZero()); < } < < public object prod(CodeContext cntx, object axis = null, object dtype = null, ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < dtype rtype = null; < if (dtype != null) { < rtype = NpyDescr.DescrConverter(cntx, dtype); < } < return ArrayReturn(Prod(iAxis, rtype, @out)); < } < < public object ptp(object axis = null, ndarray @out = null) < { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < return Ptp(iAxis, @out); < } < < public void put(object indices, object values, object mode = null) < { < ndarray aIndices; < ndarray aValues; < NpyDefs.NPY_CLIPMODE eMode; < < aIndices = (indices as ndarray); < if (aIndices == null) { < aIndices = NpyArray.FromAny(indices, NpyCoreApi.DescrFromType(NpyDefs.NPY_INTP), < 0, 0, NpyDefs.NPY_CARRAY, null); < } < aValues = (values as ndarray); < if (aValues == null) { < aValues = NpyArray.FromAny(values, Dtype, 0, 0, NpyDefs.NPY_CARRAY, null); < } < eMode = NpyUtil_ArgProcessing.ClipmodeConverter(mode); < PutTo(aValues, aIndices, eMode); < } < < public ndarray ravel(object order = null) { < NpyDefs.NPY_ORDER eOrder = NpyUtil_ArgProcessing.OrderConverter(order); < return Ravel(eOrder); < } < < public object repeat(object repeats, object axis = null) { < ndarray aRepeats = (repeats as ndarray); < if (aRepeats == null) { < aRepeats = NpyArray.FromAny(repeats, NpyCoreApi.DescrFromType(NpyDefs.NPY_INTP), < 0, 0, NpyDefs.NPY_CARRAY, null); < } < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < return ArrayReturn(Repeat(aRepeats, iAxis)); < } < < private static string[] reshapeKeywords = { "order" }; < < public ndarray reshape([ParamDictionary] IDictionary kwds, params object[] args) { < object[] keywordArgs = NpyUtil_ArgProcessing.BuildArgsArray(new object[0], reshapeKeywords, kwds); < NpyDefs.NPY_ORDER order = NpyUtil_ArgProcessing.OrderConverter(keywordArgs[0]); < IntPtr[] newshape; < // TODO: Add NpyArray_View call for (None) case. (Why?) < if (args == null) { < newshape = new IntPtr[0]; < } else if (args.Length == 1 && (args[0] is IList || args[0] is ndarray)) { < newshape = NpyUtil_ArgProcessing.IntpListConverter((IEnumerable)args[0]); < } else { < newshape = NpyUtil_ArgProcessing.IntpListConverter(args); < } < return NpyCoreApi.Newshape(this, newshape, order); < } < < public ndarray Reshape(IEnumerable shape, NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_ANYORDER) { < return NpyCoreApi.Newshape(this, shape.Select(x => (IntPtr)x).ToArray(), order); < } < < private static string[] resizeKeywords = { "refcheck" }; < < public void resize([ParamDictionary] IDictionary kwds, params object[] args) { < object[] keywordArgs = NpyUtil_ArgProcessing.BuildArgsArray(new object[0], resizeKeywords, kwds); < bool refcheck = NpyUtil_ArgProcessing.BoolConverter(keywordArgs[0]); < IntPtr[] newshape; < < if (args == null || args.Length == 0 || args.Length == 1 && args[0] == null) { < return; < } < if (args.Length == 1 && args[0] is IList) { < newshape = NpyUtil_ArgProcessing.IntpListConverter((IList)args[0]); < } else { < newshape = NpyUtil_ArgProcessing.IntpListConverter(args); < } < Resize(newshape, refcheck, NpyDefs.NPY_ORDER.NPY_CORDER); < } < < public object round(int decimals = 0, ndarray @out = null) { < return Round(decimals, @out); < } < < public object searchsorted(object keys, string side = null) { < NpyDefs.NPY_SEARCHSIDE eSide = NpyUtil_ArgProcessing.SearchsideConverter(side); < ndarray aKeys = (keys as ndarray); < if (aKeys == null) { < aKeys = NpyArray.FromAny(keys, NpyArray.FindArrayType(keys, Dtype, NpyDefs.NPY_MAXDIMS), < 0, 0, NpyDefs.NPY_CARRAY, null); < } < return ArrayReturn(SearchSorted(aKeys, eSide)); < } < < public void setfield(CodeContext cntx, object value, object dtype, int offset = 0) { < dtype d = NpyDescr.DescrConverter(cntx, dtype); < NpyArray.SetField(this, d.Descr, offset, value); < } < < public void setflags(object write = null, object align = null, object uic = null) { < int flags = RawFlags; < if (align != null) { < bool bAlign = NpyUtil_ArgProcessing.BoolConverter(align); < if (bAlign) { < flags |= NpyDefs.NPY_ALIGNED; < } else { < if (!NpyCoreApi.IsAligned(this)) { < throw new ArgumentException("cannot set aligned flag of mis-aligned array to True"); < } < flags &= ~NpyDefs.NPY_ALIGNED; < } < } < if (uic != null) { < bool bUic = NpyUtil_ArgProcessing.BoolConverter(uic); < if (bUic) { < throw new ArgumentException("cannot set UPDATEIFCOPY flag to True"); < } else { < NpyCoreApi.ClearUPDATEIFCOPY(this); < } < } < if (write != null) { < bool bWrite = NpyUtil_ArgProcessing.BoolConverter(write); < if (bWrite) { < if (!NpyCoreApi.IsWriteable(this)) { < throw new ArgumentException("cannot set WRITEABLE flag to true on this array"); < } < flags |= NpyDefs.NPY_WRITEABLE; < } else { < flags &= ~NpyDefs.NPY_WRITEABLE; < } < } < RawFlags = flags; < } < < public void sort(int axis = -1, string kind = null, object order = null) { < NpyDefs.NPY_SORTKIND sortkind = NpyUtil_ArgProcessing.SortkindConverter(kind); < if (order != null) { < throw new NotImplementedException("Field sort order not yet implemented."); < } < Sort(axis, sortkind); < } < < public object squeeze() { < return Squeeze(); < } < < public object std(CodeContext cntx, object axis = null, object dtype = null, ndarray @out = null, int ddof = 0) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < dtype rtype = null; < if (dtype != null) { < rtype = NpyDescr.DescrConverter(cntx, dtype); < } < return Std(iAxis, GetTypeDouble(this.Dtype, rtype), @out, false, ddof); < } < < public object sum(CodeContext cntx, object axis = null, object dtype = null, ndarray @out = null) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < dtype rtype = null; < if (dtype != null) { < rtype = NpyDescr.DescrConverter(cntx, dtype); < } < return ArrayReturn(Sum(iAxis, rtype, @out)); < } < < < public ndarray swapaxes(int a1, int a2) { < return SwapAxes(a1, a2); < } < < public ndarray swapaxes(object a1, object a2) { < int iA1 = NpyUtil_ArgProcessing.IntConverter(a1); < int iA2 = NpyUtil_ArgProcessing.IntConverter(a2); < return SwapAxes(iA1, iA2); < } < < < public object take(object indices, < object axis = null, < ndarray @out = null, < object mode = null) { < ndarray aIndices; < int iAxis; < NpyDefs.NPY_CLIPMODE cMode; < < aIndices = (indices as ndarray); < if (aIndices == null) { < aIndices = NpyArray.FromAny(indices, NpyCoreApi.DescrFromType(NpyDefs.NPY_INTP), < 1, 0, NpyDefs.NPY_CONTIGUOUS, null); < } < iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < cMode = NpyUtil_ArgProcessing.ClipmodeConverter(mode); < return ArrayReturn(TakeFrom(aIndices, iAxis, @out, cMode)); < } < < public void tofile(CodeContext cntx, PythonFile file, string sep = null, string format = null) { < ToFile(cntx, file, sep, format); < } < < public void tofile(CodeContext cntx, string filename, string sep = null, string format = null) { < PythonFile f = (PythonFile)NpyUtil_Python.CallBuiltin(cntx, "open", filename, "wb"); < try { < tofile(cntx, f, sep, format); < } finally { < f.close(); < } < } < < public object tolist() { < if (ndim == 0) { < return GetItem(0); < } else { < List result = new List(); < long size = Dims[0]; < for (long i = 0; i < size; i++) { < result.append(NpyCoreApi.ArrayItem(this, i).tolist()); < } < return result; < } < } < < public Bytes tostring(object order = null) { < NpyDefs.NPY_ORDER eOrder = NpyUtil_ArgProcessing.OrderConverter(order); < return ToString(eOrder); < } < < public object trace(CodeContext cntx, int offset = 0, int axis1 = 0, int axis2 = 1, < object dtype = null, ndarray @out = null) { < ndarray diag = Diagonal(offset, axis1, axis2); < return diag.sum(cntx, dtype:dtype, @out:@out); < } < < public ndarray transpose(params object[] args) { < if (args == null || args.Length == 0 || args.Length == 1 && args[0] == null) { < return Transpose(); < } else if (args.Length == 1 && args[0] is IList) { < return Transpose(NpyUtil_ArgProcessing.IntpListConverter((IList)args[0])); < } else { < return Transpose(NpyUtil_ArgProcessing.IntpListConverter(args)); < } < } < < public object var(CodeContext cntx, object axis = null, object dtype = null, ndarray @out = null, int ddof = 0) { < int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); < dtype rtype = null; < if (dtype != null) { < rtype = NpyDescr.DescrConverter(cntx, dtype); < } < return Std(iAxis, GetTypeDouble(this.Dtype, rtype), @out, true, ddof); < } < < < public ndarray view(CodeContext cntx, object dtype = null, object type = null) { < if (dtype != null && type == null) { < if (IsNdarraySubtype(dtype)) { < type = dtype; < dtype = null; < } < } < < if (type != null && !IsNdarraySubtype(type)) { < throw new ArgumentException("Type must be a subtype of ndarray."); < } < dtype rtype = null; < if (dtype != null) { < rtype = NpyDescr.DescrConverter(cntx, dtype); < } < return NpyCoreApi.View(this, rtype, type); < } < < #endregion < < #endregion < < < public long Size { < get { return NpyCoreApi.ArraySize(this).ToInt64(); } < } < < public ndarray Real { < get { return NpyCoreApi.GetReal(this); } < } < < public ndarray Imag { < get { return NpyCoreApi.GetImag(this); } < } < < public override string ToString() { < return StrFunction(this); < } < < public flatiter Flat { < get { < return NpyCoreApi.IterNew(this); < } < } < < public ndarray NewCopy(NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_CORDER) { < return NpyCoreApi.NewCopy(this, order); < } < < < /// < /// Directly accesses the array memory and returns the object at that < /// offset. No checks are made, caller can easily crash the program < /// or retrieve garbage data. < /// < /// Offset into data array in bytes < /// Contents of the location < internal object GetItem(long offset) { < return Dtype.f.GetItem(offset, this); < } < < < /// < /// Directly sets a given location in the data array. No checks are < /// made to make sure the offset is sensible or the data is valid in < /// anyway -- caller beware. < /// 'internal' because this is a security vulnerability. < /// < /// Value to write < /// Offset into array in bytes < internal void SetItem(object src, long offset) { < Dtype.f.SetItem(src, offset, this); < } < < < /// < /// Handle to the core representation. < /// < public IntPtr Array { < get { return core; } < } < < < /// < /// Base address of the array data memory. Use with caution. < /// < internal IntPtr DataAddress { < get { return Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_data); } < } < < /// < /// Returns an array of the sizes of each dimension. This property allocates < /// a new array with each call and must make a managed-to-native call so it's < /// worth caching the results if used in a loop. < /// < public Int64[] Dims { < get { return NpyCoreApi.GetArrayDimsOrStrides(this, true); } < } < < < /// < /// Returns the stride of a given dimension. For looping over all dimensions, < /// use 'strides'. This is more efficient if only one dimension is of interest. < /// < /// Dimension to query < /// Data stride in bytes < public long Stride(int dimension) { < return NpyCoreApi.GetArrayStride(this, dimension); < } < < < /// < /// True if memory layout of array is contiguous < /// < public bool IsContiguous { < get { return ChkFlags(NpyDefs.NPY_CONTIGUOUS); } < } < < public bool IsOneSegment { < get { return ndim == 0 || ChkFlags(NpyDefs.NPY_FORTRAN) || ChkFlags(NpyDefs.NPY_CARRAY); } < } < < /// < /// True if memory layout is Fortran order, false implies C order < /// < public bool IsFortran { < get { return ChkFlags(NpyDefs.NPY_FORTRAN) && ndim > 1; } < } < < public bool IsNotSwapped { < get { return Dtype.IsNativeByteOrder; } < } < < public bool IsByteSwapped { < get { return !IsNotSwapped; } < } < < public bool IsCArray { < get { return ChkFlags(NpyDefs.NPY_CARRAY) && IsNotSwapped; } < } < < public bool IsCArray_RO { < get { return ChkFlags(NpyDefs.NPY_CARRAY_RO) && IsNotSwapped; } < } < < public bool IsFArray { < get { return ChkFlags(NpyDefs.NPY_FARRAY) && IsNotSwapped; } < } < < public bool IsFArray_RO { < get { return ChkFlags(NpyDefs.NPY_FARRAY_RO) && IsNotSwapped; } < } < < public bool IsBehaved { < get { return ChkFlags(NpyDefs.NPY_BEHAVED) && IsNotSwapped; } < } < < public bool IsBehaved_RO { < get { return ChkFlags(NpyDefs.NPY_ALIGNED) && IsNotSwapped; } < } < < internal bool IsComplex { < get { return NpyDefs.IsComplex(Dtype.TypeNum); } < } < < internal bool IsInteger { < get { return NpyDefs.IsInteger(Dtype.TypeNum); } < } < < public bool IsFlexible { < get { return NpyDefs.IsFlexible(Dtype.TypeNum); } < } < < public bool IsWriteable { < get { return ChkFlags(NpyDefs.NPY_WRITEABLE); } < } < < public bool IsString { < get { return Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_STRING; } < } < < < /// < /// TODO: What does this return? < /// < public int ElementStrides { < get { return NpyCoreApi.ElementStrides(this); } < } < < public bool StridingOk(NpyDefs.NPY_ORDER order) { < return order == NpyDefs.NPY_ORDER.NPY_ANYORDER || < order == NpyDefs.NPY_ORDER.NPY_CORDER && IsContiguous || < order == NpyDefs.NPY_ORDER.NPY_FORTRANORDER && IsFortran; < } < < private bool ChkFlags(int flag) { < return ((RawFlags & flag) == flag); < } < < // These operators are useful from other C# code and also turn into the < // appropriate Python functions (+ goes to __add__, etc). < < #region IEnumerable interface < < public IEnumerator GetEnumerator() { < return new ndarray_Enumerator(this); < } < < System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { < return new ndarray_Enumerator(this); < } < < #endregion < < #region Internal methods < < internal long Length { < get { < return Dims[0]; < } < } < < public static object ArrayReturn(ndarray a) { < if (a.ndim == 0) { < return a.Dtype.ToScalar(a); < } else { < return a; < } < } < private string BuildStringRepr(bool repr) { < // Equivalent to array_repr_builtin (arrayobject.c) < StringBuilder sb = new StringBuilder(); < if (repr) sb.Append("array("); < DumpData(sb, this.Dims, this.Strides, 0, 0); < < if (repr) { < if (NpyDefs.IsExtended(this.Dtype.TypeNum)) { < sb.AppendFormat(", '{0}{1}')", (char)Dtype.Type, this.Dtype.ElementSize); < } else { < sb.AppendFormat(", '{0}')", (char)Dtype.Type); < } < } < return sb.ToString(); < } < < /// < /// Recursively walks the array and appends a representation of each element < /// to the passed string builder. Square brackets delimit each array dimension. < /// < /// StringBuilder instance to append to < /// Array of size of each dimension < /// Offset in bytes to reach next element in each dimension < /// Index of the current dimension (starts at 0, recursively counts up) < /// Byte offset into data array, starts at 0 < private void DumpData(StringBuilder sb, long[] dimensions, long[] strides, < int dimIdx, long offset) { < < if (dimIdx == ndim) { < Object value = Dtype.f.GetItem(offset, this); < if (value == null) { < sb.Append("None"); < } else { < sb.Append((string)PythonOps.Repr(NpyUtil_Python.DefaultContext, value)); < } < } else { < sb.Append('['); < for (int i = 0; i < dimensions[dimIdx]; i++) { < DumpData(sb, dimensions, strides, dimIdx + 1, < offset + strides[dimIdx] * i); < if (i < dimensions[dimIdx] - 1) { < sb.Append(", "); < } < } < sb.Append(']'); < } < } < < #region Direct Typed Accessors < // BEWARE! These are direct memory accessors and ignore the type of the array. < // Yes, you can do clever things and yes, you can hang yourself, too. < < public unsafe int ReadAsInt32(long index) { < return *(int*)((long)UnsafeAddress + OffsetToItem(index)); < } < < public unsafe void WriteAsInt32(long index, int v) { < *(int*)((long)UnsafeAddress + OffsetToItem(index)) = v; < } < < public unsafe IntPtr ReadAsIntPtr(long index) { < return *(IntPtr*)((long)UnsafeAddress + OffsetToItem(index)); < } < < public unsafe void WriteAsIntPtr(long index, IntPtr v) { < *(IntPtr*)((long)UnsafeAddress + OffsetToItem(index)) = v; < } < < public unsafe long ReadAsInt64(long index) { < return *(long*)((long)UnsafeAddress + OffsetToItem(index)); < } < < public unsafe void WriteAsInt64(long index, long v) { < *(long*)((long)UnsafeAddress + OffsetToItem(index)) = v; < } < < public unsafe float ReadAsFloat(long index) { < return *(float*)((long)UnsafeAddress + OffsetToItem(index)); < } < < public unsafe void WriteAsFloat(long index, float v) { < *(float*)((long)UnsafeAddress + OffsetToItem(index)) = v; < } < < public unsafe double ReadAsDouble(long index) { < return *(double*)((long)UnsafeAddress + OffsetToItem(index)); < } < < public unsafe void WriteAsDouble(long index, double v) { < *(double*)((long)UnsafeAddress + OffsetToItem(index)) = v; < } < < private long OffsetToItem(long index) { < if (ndim > 1) { < throw new IndexOutOfRangeException("Only 1-d arrays are currently supported. Please use ArrayItem()."); < } < < long dim0 = Dims[0]; < if (index < 0) { < index += dim0; < } < if (index < 0 || index >= dim0) { < throw new IndexOutOfRangeException("Index out of range"); < } < return index * Strides[0]; < } < #endregion < < /// < /// Indexes an array by a single long and returns either an item or a sub-array. < /// < /// The index into the array < object ArrayItem(long index) { < if (ndim == 1) { < return Dtype.ToScalar(this, OffsetToItem(index)); < } else { < return NpyCoreApi.ArrayItem(this, index); < } < } < < internal Int32 RawFlags { < get { < return Marshal.ReadInt32(Array + NpyCoreApi.ArrayOffsets.off_flags); < } < set { < Marshal.WriteInt32(Array + NpyCoreApi.ArrayOffsets.off_flags, value); < } < } < < internal static dtype GetTypeDouble(dtype dtype1, dtype dtype2) { < if (dtype2 != null) { < return dtype2; < } < if (dtype1.TypeNum < NpyDefs.NPY_TYPES.NPY_FLOAT) { < return NpyCoreApi.DescrFromType(NpyDefs.NPY_TYPES.NPY_DOUBLE); < } else { < return dtype1; < } < } < < private static bool IsNdarraySubtype(object type) { < if (type == null) { < return false; < } < PythonType pt = type as PythonType; < if (pt == null) { < return false; < } < return PythonOps.IsSubClass(pt, DynamicHelpers.GetPythonTypeFromType(typeof(ndarray))); < } < < /// < /// Pointer to the internal memory. Should be used with great caution - memory < /// is native memory, not managed memory. < /// < public IntPtr UnsafeAddress { < get { return Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_data); } < } < < internal ndarray BaseArray { < get { < IntPtr p = Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_base_array); < if (p == IntPtr.Zero) { < return null; < } else { < return NpyCoreApi.ToInterface(p); < } < } < set { < lock (this) { < IntPtr p = Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_base_array); < if (p != IntPtr.Zero) { < NpyCoreApi.Decref(p); < } < NpyCoreApi.Incref(value.core); < Marshal.WriteIntPtr(core, NpyCoreApi.ArrayOffsets.off_base_array, value.core); < } < } < } < < /// < /// Copies data into the array from 'data'. Offset is the offset into this < /// array's data space in bytes. The number of bytes copied is based on the < /// element size of the array's dtype. < /// < /// Offset into this array's data (bytes) < /// Memory address to copy the data from < /// If true data is byte-swapped during copy < internal unsafe void CopySwapIn(long offset, void* data, bool swap) { < NpyCoreApi.CopySwapIn(this, offset, data, swap); < } < < /// < /// Copies data out of the array into 'data'. Offset is the offset into this < /// array's data space in bytes. Number of bytes copied is based on the < /// element size of the array's dtype. < /// < /// Offset into array's data in bytes < /// Memory address to copy the data to < /// If true, results are byte-swapped from the array's image < internal unsafe void CopySwapOut(long offset, void* data, bool swap) { < NpyCoreApi.CopySwapOut(this, offset, data, swap); < } < < #endregion < < #region Memory pressure handling < < // The GC only knows about the managed memory that has been allocated, < // not the large pool of native array data. This means that the GC < // may not run even if we are about to run out of memory. Adding < // memory pressure tells the GC how much native memory is associated < // with managed objects. < < /// < /// Track the total pressure allocated by numpy. This is just for < /// error checking and to make sure it goes back to 0 in the end. < /// < private static long TotalMemPressure = 0; < < < /// < /// Memory pressure reserved for this instance in bytes to be released on dispose. < /// < private long reservedMemPressure = 0; < < internal static void IncreaseMemoryPressure(ndarray arr) { < if (arr.flags.owndata) { < int newBytes = (int)(arr.Size * arr.Dtype.ElementSize); < if (newBytes == 0) { < return; < } < < // Stupid annoying hack. What happens is the finalizer queue < // is processed by a low-priority background thread and can fall < // behind, allowing memory to be filled if the primary thread is < // creating garbage faster than the finalizer thread is cleaning < // it up. This is a heuristic to cause the main thread to pause < // when needed. All of this is necessary because the ndarray < // object defines a finalizer, which most .NET objects don't have < // and .NET doesn't appear well optimized for cases with huge < // numbers of finalizable objects. < // TODO: What do we do for a collection heuristic for 64-bit? Don't < // want to collect too often but don't want to page either. < if (IntPtr.Size == 4 && < (TotalMemPressure > 1500000000 || TotalMemPressure + newBytes > 1700000000)) { < System.GC.Collect(); < System.GC.WaitForPendingFinalizers(); < } < < System.Threading.Interlocked.Add(ref TotalMemPressure, newBytes); < System.GC.AddMemoryPressure(newBytes); < arr.reservedMemPressure = newBytes; < //Console.WriteLine("Added {0} bytes of pressure, now {1}", < // newBytes, TotalMemPressure); < } < } < < internal static void DecreaseMemoryPressure(long numBytes) { < System.Threading.Interlocked.Add(ref TotalMemPressure, -numBytes); < if (numBytes > 0) { < System.GC.RemoveMemoryPressure(numBytes); < } < //Console.WriteLine("Removed {0} bytes of pressure, now {1}", < // newBytes, TotalMemPressure); < } < < #endregion < < #region Buffer protocol < < public IExtBufferProtocol GetBuffer(NpyBuffer.PyBuf flags) { < return new ndarrayBufferAdapter(this, flags); < } < < public IExtBufferProtocol GetPyBuffer(int flags) { < return GetBuffer((NpyBuffer.PyBuf)flags); < } < < /// < /// Adapts an instance that implements IBufferProtocol and IPythonBufferable < /// to the IExtBufferProtocol. < /// < private class ndarrayBufferAdapter : IExtBufferProtocol < { < internal ndarrayBufferAdapter(ndarray a, NpyBuffer.PyBuf flags) { < arr = a; < < if ((flags & NpyBuffer.PyBuf.C_CONTIGUOUS) == NpyBuffer.PyBuf.C_CONTIGUOUS && < !arr.ChkFlags(NpyDefs.NPY_C_CONTIGUOUS)) { < throw new ArgumentException("ndarray is not C-continuous"); < } < if ((flags & NpyBuffer.PyBuf.F_CONTIGUOUS) == NpyBuffer.PyBuf.F_CONTIGUOUS && < !arr.ChkFlags(NpyDefs.NPY_F_CONTIGUOUS)) { < throw new ArgumentException("ndarray is not F-continuous"); < } < if ((flags & NpyBuffer.PyBuf.ANY_CONTIGUOUS) == NpyBuffer.PyBuf.ANY_CONTIGUOUS && < !arr.IsOneSegment) { < throw new ArgumentException("ndarray is not contiguous"); < } < if ((flags & NpyBuffer.PyBuf.STRIDES) != NpyBuffer.PyBuf.STRIDES && < (flags & NpyBuffer.PyBuf.ND) == NpyBuffer.PyBuf.ND && < !arr.ChkFlags(NpyDefs.NPY_C_CONTIGUOUS)) { < throw new ArgumentException("ndarray is not c-contiguous"); < } < if ((flags & NpyBuffer.PyBuf.WRITABLE) == NpyBuffer.PyBuf.WRITABLE && < !arr.IsWriteable) { < throw new ArgumentException("ndarray is not writable"); < } < < readOnly = ((flags & NpyBuffer.PyBuf.WRITABLE) == 0); < ndim = ((flags & NpyBuffer.PyBuf.ND) == 0) ? 0 : arr.ndim; < shape = ((flags & NpyBuffer.PyBuf.ND) == 0) ? null : arr.Dims; < strides = ((flags & NpyBuffer.PyBuf.STRIDES) == 0) ? null : arr.Strides; < < if ((flags & NpyBuffer.PyBuf.FORMAT) == 0) { < // Force an array of unsigned bytes. < itemCount = arr.Size * arr.Dtype.ElementSize; < itemSize = sizeof(byte); < format = null; < } else { < itemCount = arr.Length; < itemSize = arr.Dtype.ElementSize; < format = NpyCoreApi.GetBufferFormatString(arr); < } < } < < #region IExtBufferProtocol < < long IExtBufferProtocol.ItemCount { < get { return itemCount; } < } < < string IExtBufferProtocol.Format { < get { return format; } < } < < int IExtBufferProtocol.ItemSize { < get { return itemSize; } < } < < int IExtBufferProtocol.NumberDimensions { < get { return ndim; } < } < < bool IExtBufferProtocol.ReadOnly { < get { return readOnly; } < } < < IList IExtBufferProtocol.Shape { < get { return shape; } < } < < long[] IExtBufferProtocol.Strides { < get { return strides; } < } < < long[] IExtBufferProtocol.SubOffsets { < get { < long[] s = new long[ndim]; < for (int i = 0; i < s.Length; i++) s[i] = -1; < return s; < } < } < < IntPtr IExtBufferProtocol.UnsafeAddress { < get { return arr.DataAddress; } < } < < /// < /// Total number of bytes in the array < /// < long IExtBufferProtocol.Size { < get { return arr.Size; } < } < < #endregion < < private readonly ndarray arr; < private readonly bool readOnly; < private readonly long itemCount; < private readonly string format; < private readonly int ndim; < private readonly int itemSize; < private readonly IList shape; < private readonly long[] strides; < < } < < #endregion < } < < internal class ndarray_Enumerator : IEnumerator < { < public ndarray_Enumerator(ndarray a) { < arr = a; < index = -1; < } < < public object Current { < get { return arr[(int)index]; } < } < < public void Dispose() { < arr = null; < } < < < public bool MoveNext() { < index += 1; < return (index < arr.Dims[0]); < } < < public void Reset() { < index = -1; < } < < private ndarray arr; < private long index; < } < } --- > ???using System; > using System.Collections; > using System.Collections.Generic; > using System.Linq; > using System.Text; > using System.Runtime.InteropServices; > using System.Runtime.CompilerServices; > using System.Reflection; > using System.Numerics; > using IronPython.Modules; > using IronPython.Runtime; > using IronPython.Runtime.Operations; > using IronPython.Runtime.Types; > using IronPython.Runtime.Exceptions; > using Microsoft.Scripting; > using Microsoft.Scripting.Runtime; > > > namespace NumpyDotNet > { > /// > /// Implements the Numpy python 'ndarray' object and acts as an interface to > /// the core NpyArray data structure. Npy_INTERFACE(NpyArray *) points an > /// instance of this class. > /// > [PythonType] > public partial class ndarray : Wrapper, IEnumerable, IBufferProvider, NumpyDotNet.IArray > { > public const string __module__ = "numpy"; > > public ndarray() { > } > > public static ndarray __new__(CodeContext cntx, PythonType cls, > object shape, object dtype = null, > object buffer = null, object offset = null, > object strides = null, object order = null) { > ndarray result = (ndarray)ObjectOps.__new__(cntx, cls); > result.Construct(cntx, shape, dtype, buffer, offset, strides, order); > return result; > } > > internal void Construct(CodeContext cntx, object shape, object dtype = null, > object buffer = null, object offset = null, > object strides = null, object order = null) { > dtype type = null; > > core = IntPtr.Zero; > > long[] aShape = NpyUtil_ArgProcessing.IntArrConverter(shape); > if (dtype != null) { > type = NpyDescr.DescrConverter(cntx, dtype); > } > > if (buffer != null) > throw new NotImplementedException("Buffer support is not implemented."); > long loffset = NpyUtil_ArgProcessing.IntConverter(offset); > long[] aStrides = NpyUtil_ArgProcessing.IntArrConverter(strides); > NpyDefs.NPY_ORDER eOrder = NpyUtil_ArgProcessing.OrderConverter(order); > > if (type == null) > type = NpyCoreApi.DescrFromType(NpyDefs.DefaultType); > > int itemsize = type.ElementSize; > if (itemsize == 0) { > throw new ArgumentException("data-type with unspecified variable length"); > } > > if (aStrides != null) { > if (aStrides.Length != aShape.Length) { > throw new ArgumentException("strides, if given, must be the same length as shape"); > } > > if (!NpyArray.CheckStrides(itemsize, aShape, aStrides)) { > throw new ArgumentException("strides is compatible with shape of requested array and size of buffer"); > } > } > > // Creates a new array object. By passing 'this' in the current instance > // becomes the wrapper object for the new array. > ndarray wrap = NpyCoreApi.NewFromDescr(type, aShape, aStrides, 0, > new NpyCoreApi.UseExistingWrapper { Wrapper = this }); > if (wrap != this) { > throw new InvalidOperationException("Internal error: returned array wrapper is different than current instance."); > } > // NOTE: CPython fills object arrays with Py_None here. We don't > // need to do this since None is null and the arrays are zero filled. > } > > protected override void Dispose(bool disposing) { > if (core != IntPtr.Zero) { > lock (this) { > if (reservedMemPressure > 0) > DecreaseMemoryPressure(reservedMemPressure); > base.Dispose(disposing); > } > } > } > > /// > /// Danger! This method is only intended to be used indirectly during construction > /// when the new instance is passed into the core as the 'interfaceData' field so > /// ArrayNewWrapper can pair up this instance with a core object. If this pointer > /// is changed after pairing, bad things can happen. > /// > /// Core object to be paired with this wrapper > internal void SetArray(IntPtr a) { > if (core == null) { > throw new InvalidOperationException("Attempt to change core array object for already-constructed wrapper."); > } > core = a; > } > > > #region Public interfaces (must match CPython) > > private static Func reprFunction; > private static Func strFunction; > > /// > /// Sets a function to be triggered for the repr() operator or null to default to the > /// built-in version. > /// > public static Func ReprFunction { > get { return reprFunction; } > internal set { reprFunction = (value != null) ? value : x => x.BuildStringRepr(true); } > } > > /// > /// Sets a function to be triggered on the str() operator or ToString() method. Null defaults to > /// the built-in version. > /// > public static Func StrFunction { > get { return strFunction; } > internal set { strFunction = (value != null) ? value : x => x.BuildStringRepr(false); } > } > > static ndarray() { > ReprFunction = null; > StrFunction = null; > } > > #region Python methods > > public virtual string __repr__(CodeContext cntx) { > return ReprFunction(this); > } > > public virtual string __str__(CodeContext cntx) { > return StrFunction(this); > } > > public virtual object __reduce__(CodeContext cntx, object notused=null) { > const int version = 1; > > // Result is a tuple of (callable object, arguments, object's state). > object[] ret = new object[3]; > ret[0] = NpyUtil_Python.GetModuleAttr(cntx, "numpy.core.multiarray", "_reconstruct"); > if (ret[0] == null) return null; > > ret[1] = PythonOps.MakeTuple(DynamicHelpers.GetPythonType(this), PythonOps.MakeTuple(0), "b"); > > // Fill in the object's state. This is a tuple with 5 argumentS: > // 1) an integer with the pickle version > // 2) a Tuple giving the shape > // 3) a dtype object with the correct byteorder set > // 4) a Bool stating if Fortran or not > // 5) a Python object representing the data (a string or list or something) > object[] state = new object[5]; > state[0] = version; > state[1] = this.shape; > state[2] = this.Dtype; > state[3] = this.IsFortran; > state[4] = Dtype.ChkFlags(NpyDefs.NPY_LIST_PICKLE) ? GetPickleList() : ToBytes(); > > ret[2] = new PythonTuple(state); > return new PythonTuple(ret); > } > > > /// > /// Generates a string containing the byte representation of the array. This is quite > /// inefficient as the string (being 16-bit unicode) is twice the size needed, but this > /// is what the pickler uses. Ugh. > /// > /// Desired output order, default is array's current order > /// String containing data bytes > private String ToBytes(NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_ANYORDER) { > if (order == NpyDefs.NPY_ORDER.NPY_ANYORDER) { > order = IsFortran ? NpyDefs.NPY_ORDER.NPY_FORTRANORDER : NpyDefs.NPY_ORDER.NPY_CORDER; > } > > long size = itemsize * Size; > if (size >= Int32.MaxValue) { > throw new NotImplementedException("Total array size exceeds 2GB limit imposed by .NET string size, unable to pickle array."); > } > > string result; > if (IsContiguous && order == NpyDefs.NPY_ORDER.NPY_CORDER || > IsFortran && order == NpyDefs.NPY_ORDER.NPY_FORTRANORDER) { > unsafe { > result = new string((sbyte*)UnsafeAddress, 0, (int)size); > } > } else { > // TODO: Implementation requires some thought to implement to try to avoid making multiple copies of > // the data. The issue is that we have to return a string. We can allocate a string of the appropriate > // size, but it is immutable. StringBuilder works, but we end up copying. Can do it in C, but end up > // copying in C, then copy into String. Ugh. > throw new NotImplementedException("Pickling of non-contiguous arrays or transposing arrays is not supported"); > } > return result; > } > > private object GetPickleList() { > List list = new List(); > for (flatiter iter = this.Flat; iter.MoveNext(); list.append(iter.Current)) ; > return list; > } > > public virtual object __setstate__(PythonTuple t) { > if (t.Count == 4) { > return __setstate__(0, (PythonTuple)t[0], (dtype)t[1], t[2], t[3]); > } else if (t.Count == 5) { > return __setstate__((int)t[0], (PythonTuple)t[1], (dtype)t[2], t[3], t[4]); > } else { > throw new NotImplementedException( > String.Format("Unhandled pickle format with {0} arguments.", t.Count)); > } > } > > > /// > /// Duplicates the array, performing a deepcopy of the array and all contained objects. > /// > /// Passed to the Python copy.deepcopy() routine > /// duplicated array > public object __deepcopy__(object visit) { > ndarray ret = this.Copy(); > if (ret.Dtype.IsObject) { > IntPtr optr = ret.UnsafeAddress; > flatiter it = this.Flat; > while (it.MoveNext()) { > deepcopy_call(it.CurrentPtr, optr, this, this.Dtype, visit); > optr = optr + this.Dtype.itemsize; > } > } > return ret; > } > > > /// > /// Recursive function to copy object element, even when the element is a record with > /// fields containing objects. > /// > /// Pointer to the start of the input element > /// Pointer to the destination element > /// Source array > /// Element type descriptor > /// Passed to Python copy.deepcopy() function > private void deepcopy_call(IntPtr iptr, IntPtr optr, ndarray arr, dtype type, object visit) { > if (type.IsObject) { > if (type.HasNames) { > // Check each field and recursively process any that contain object references. > PythonDictionary fields = Dtype.Fields; > foreach (KeyValuePair i in fields) { > string key = (string)i.Key; > PythonTuple value = (PythonTuple)i.Value; > if (value.Count == 3 && (string)value[2] == key) continue; > > dtype subtype = (dtype)value[0]; > int offset = (int)value[1]; > > deepcopy_call(iptr + offset, optr + offset, arr, subtype, visit); > } > } else { > object current = type.f.GetItem((long)iptr - (long)arr.UnsafeAddress, arr); > object copy = NpyUtil_Python.CallFunction(NpyUtil_Python.DefaultContext, "copy", "deepcopy", > current, visit); > IntPtr otemp = Marshal.ReadIntPtr(optr); > if (otemp != IntPtr.Zero) { > NpyCoreApi.FreeGCHandle( NpyCoreApi.GCHandleFromIntPtr(otemp) ); > } > Marshal.WriteIntPtr(optr, GCHandle.ToIntPtr(NpyCoreApi.AllocGCHandle(copy))); > } > } > } > > > public virtual object __setstate__(PythonTuple shape, dtype typecode, object fortran, object rawdata) { > return __setstate__(0, shape, typecode, fortran, rawdata); > } > > public virtual object __setstate__(int version, PythonTuple shape, dtype typecode, object fortran, object rawData) { > bool fortranFlag = NpyUtil_ArgProcessing.BoolConverter(fortran); > > if (version != 1 && version != 0) { > throw new ArgumentException( > String.Format("can't handle version {0} of numpy.ndarray pickle.", version)); > } > > IntPtr[] dimensions = NpyUtil_ArgProcessing.IntpArrConverter(shape); > int nd = dimensions.Length; > long size = dimensions.Aggregate(1L, (x, y) => x * (long)y); > > if (nd < 1) { > return null; > } > if (typecode.ElementSize == 0) { > throw new ArgumentException("Invalid data-type size"); > } > if (size < 0 || size > Int64.MaxValue / typecode.ElementSize) { > throw new InsufficientMemoryException(); > } > > if (typecode.ChkFlags(NpyDefs.NPY_LIST_PICKLE)) { > if (!(rawData is List)) { > throw new ArgumentTypeException("object pickle not returning list"); > } > } else { > if (!(rawData is string)) { > throw new ArgumentTypeException("pickle not returning string"); > } > if (((string)rawData).Length != typecode.itemsize * size) { > throw new ArgumentException("buffer size does not match array size"); > } > } > > // Set the state of this array using the passed in data. Everything in this array goes away. > // The .SetState method resizes/reallocated the data memory. > this.Dtype = typecode; > NpyCoreApi.SetState(this, dimensions, fortranFlag ? NpyDefs.NPY_ORDER.NPY_FORTRANORDER : NpyDefs.NPY_ORDER.NPY_CORDER, > rawData as string); > > if (rawData is List) { > flatiter iter = NpyCoreApi.IterNew(this); > foreach (object o in (List)rawData) { > if (!iter.MoveNext()) { > break; > } > iter.Current = o; > } > } > return null; > } > > > /// > /// Returns the length of dimension zero of the array > /// > /// Length of the first dimension > public virtual object __len__() { > if (ndim == 0) { > throw new ArgumentTypeException("len() of unsized object"); > } > return PythonOps.ToPython((IntPtr)Dims[0]); > } > > public object __abs__(CodeContext cntx) { > return UnaryOp(cntx, this, NpyDefs.NpyArray_Ops.npy_op_absolute); > } > > public ndarray __array__(CodeContext cntx, object descr = null) { > dtype newtype = null; > ndarray result; > > if (descr != null) { > newtype = NpyDescr.DescrConverter(cntx, descr); > } > if (GetType() != typeof(ndarray)) { > result = NpyCoreApi.FromArray(this, Dtype, NpyDefs.NPY_ENSUREARRAY); > } else { > result = this; > } > if (newtype == null || newtype == result.Dtype) { > return result; > } else { > return NpyCoreApi.CastToType(result, newtype, false); > } > } > > public ndarray __array_prepare__(ndarray a, params object[] args) { > return NpyCoreApi.ViewLike(a, this); > } > > public ndarray __array_wrap__(ndarray a) { > if (GetType() == a.GetType()) { > return a; > } else { > return NpyCoreApi.ViewLike(a, this); > } > } > > public object __divmod__(CodeContext cntx, Object b) { > return PythonOps.MakeTuple( > BinaryOp(cntx, this, b, NpyDefs.NpyArray_Ops.npy_op_floor_divide), > BinaryOp(cntx, this, b, NpyDefs.NpyArray_Ops.npy_op_remainder)); > } > > public object __rdivmod__(CodeContext cntx, Object a) { > return PythonOps.MakeTuple( > BinaryOp(cntx, a, this, NpyDefs.NpyArray_Ops.npy_op_floor_divide), > BinaryOp(cntx, a, this, NpyDefs.NpyArray_Ops.npy_op_remainder)); > } > > public object __lshift__(CodeContext cntx, Object b) { > return BinaryOp(cntx, this, b, NpyDefs.NpyArray_Ops.npy_op_left_shift); > } > > public object __rlshift__(CodeContext cntx, Object a) { > return BinaryOp(cntx, a, this, NpyDefs.NpyArray_Ops.npy_op_left_shift); > } > > public object __rshift__(CodeContext cntx, Object b) { > return BinaryOp(cntx, this, b, NpyDefs.NpyArray_Ops.npy_op_right_shift); > } > > public object __rrshift__(CodeContext cntx, Object a) { > return BinaryOp(cntx, a, this, NpyDefs.NpyArray_Ops.npy_op_right_shift); > } > > public object __sqrt__(CodeContext cntx) { > return UnaryOp(cntx, this, NpyDefs.NpyArray_Ops.npy_op_sqrt); > } > > public object __mod__(CodeContext cntx, Object b) { > return BinaryOp(cntx, this, b, "remainder"); > } > > public object __rmod__(CodeContext cntx, Object a) { > return BinaryOp(cntx, a, this, "remainder"); > } > > #endregion > > #region Operators > > internal static object BinaryOp(CodeContext cntx, object a, object b, ufunc f, ndarray ret = null) { > if (cntx == null) { > cntx = NpyUtil_Python.DefaultContext; > } > try { > object result; > if (ret == null) { > result = f.Call(cntx, null, a, b); > } else { > result = f.Call(cntx, null, a, b, ret); > } > if (result.GetType() == typeof(ndarray)) { > return ArrayReturn((ndarray)result); > } else { > return result; > } > } catch (NotImplementedException) { > return cntx.LanguageContext.BuiltinModuleDict["NotImplemented"]; > } > } > > internal static object BinaryOp(CodeContext cntx, object a, object b, > NpyDefs.NpyArray_Ops op, ndarray ret = null) { > ufunc f = NpyCoreApi.GetNumericOp(op); > return BinaryOp(cntx, a, b, f, ret); > } > > internal static object BinaryOp(CodeContext cntx, object a, object b, > string fname, ndarray ret = null) { > ufunc f = ufunc.GetFunction(fname); > return BinaryOp(cntx, a, b, f, ret); > } > > > internal static object UnaryOp(CodeContext cntx, object a, NpyDefs.NpyArray_Ops op, > ndarray ret = null) { > if (cntx == null) { > cntx = NpyUtil_Python.DefaultContext; > } > ufunc f = NpyCoreApi.GetNumericOp(op); > object result; > if (ret == null) { > result = f.Call(cntx, null, a); > } else { > result = f.Call(cntx, null, a, ret); > } > if (result is ndarray) { > return ArrayReturn((ndarray)result); > } else { > return result; > } > } > > public static object operator +(ndarray a) { > return a; > } > > public static object operator +(ndarray a, Object b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_add); > } > > public static object operator +(object a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_add); > } > > public static object operator +(ndarray a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_add); > } > > [SpecialName] > public object InPlaceAdd(object b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_add, this); > } > > [SpecialName] > public object InPlaceAdd(ndarray b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_add, this); > } > > public static object operator -(ndarray a, Object b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_subtract); > } > > public static object operator -(object a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_subtract); > } > > public static object operator -(ndarray a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_subtract); > } > > [SpecialName] > public object InPlaceSubtract(object b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_subtract, this); > } > > [SpecialName] > public object InPlaceSubtract(ndarray b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_subtract, this); > } > > public static object operator -(ndarray a) { > return UnaryOp(null, a, NpyDefs.NpyArray_Ops.npy_op_negative); > } > > public static object operator *(ndarray a, Object b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_multiply); > } > > public static object operator *(object a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_multiply); > } > > public static object operator *(ndarray a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_multiply); > } > > [SpecialName] > public object InPlaceMultiply(object b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_multiply, this); > } > > [SpecialName] > public object InPlaceMultiply(ndarray b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_multiply, this); > } > > public static object operator /(ndarray a, Object b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_divide); > } > > public static object operator /(object a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_divide); > } > > public static object operator /(ndarray a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_divide); > } > > [SpecialName] > public object InPlaceDivide(object b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_divide, this); > } > > [SpecialName] > public object InPlaceDivide(ndarray b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_divide, this); > } > > [SpecialName] > public object InPlaceTrueDivide(object b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_true_divide, this); > } > > [SpecialName] > public object InPlaceTrueDivide(ndarray b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_true_divide, this); > } > > [SpecialName] > public object InPlaceFloorDivide(object b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_floor_divide, this); > } > > [SpecialName] > public object InPlaceFloorDivide(ndarray b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_floor_divide, this); > } > > public object __pow__(object a) { > // TODO: Add optimizations for scalar powers > return BinaryOp(null, this, a, NpyDefs.NpyArray_Ops.npy_op_power); > } > > > > public static object operator &(ndarray a, Object b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and); > } > > public static object operator &(object a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and); > } > > public static ndarray operator &(ndarray a, ndarray b) { > return (ndarray)BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and); > } > > [SpecialName] > public object InPlaceBitwiseAnd(object b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and, this); > } > > [SpecialName] > public object InPlaceBitwiseAnd(ndarray b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_and, this); > } > > public static object operator |(ndarray a, Object b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or); > } > > public static object operator |(object a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or); > } > > public static ndarray operator |(ndarray a, ndarray b) { > return (ndarray)BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or); > } > > [SpecialName] > public object InPlaceBitwiseOr(object b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or, this); > } > > [SpecialName] > public object InPlaceBitwiseOr(ndarray b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_or, this); > } > > public static object operator ^(ndarray a, Object b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor); > } > > public static object operator ^(object a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor); > } > > public static object operator ^(ndarray a, ndarray b) { > return BinaryOp(null, a, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor); > } > > public static ndarray operator <<(ndarray a, int shift) { > return (ndarray)BinaryOp(null, a, shift, NpyDefs.NpyArray_Ops.npy_op_left_shift); > } > > public static ndarray operator >>(ndarray a, int shift) { > return (ndarray)BinaryOp(null, a, shift, NpyDefs.NpyArray_Ops.npy_op_right_shift); > } > > public static object Power(Object a, Object b) { > return BinaryOp(null, NpyArray.FromAny(a), b, NpyDefs.NpyArray_Ops.npy_op_power); > } > > [SpecialName] > public object InPlaceExclusiveOr(object b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor, this); > } > > [SpecialName] > public object InPlaceExclusiveOr(ndarray b) { > return BinaryOp(null, this, b, NpyDefs.NpyArray_Ops.npy_op_bitwise_xor, this); > } > > public static object operator ~(ndarray a) { > return UnaryOp(null, a, NpyDefs.NpyArray_Ops.npy_op_invert); > } > > public static implicit operator String(ndarray a) { > return StrFunction(a); > } > > // NOTE: For comparison operators we use the Python names > // since these operators usually return boolean arrays and > // .NET seems to expect them to return bool > > public object __eq__(CodeContext cntx, object o) { > if (o == null) { > return false; > } > NpyDefs.NPY_TYPES type = Dtype.TypeNum; > ndarray arrayother = o as ndarray; > if (arrayother == null) { > // Try to convert to an array. Return not equal on failure > try { > if (type != NpyDefs.NPY_TYPES.NPY_OBJECT) { > type = NpyDefs.NPY_TYPES.NPY_NOTYPE; > } > arrayother = NpyArray.FromAny(o, NpyCoreApi.DescrFromType(type), flags: NpyDefs.NPY_BEHAVED | NpyDefs.NPY_ENSUREARRAY); > if (arrayother == null) { > return false; > } > } catch { > return false; > } > } > > // The next two blocks are ugly. First try equal with arguments in the expected > // order this == arrayother. If that fails with a not implemented issue or type > // error, then we retry with the arguments reversed. > object result = null; > try { > result = BinaryOp(cntx, this, arrayother, NpyDefs.NpyArray_Ops.npy_op_equal); > } catch (NotImplementedException) { > result = null; > } catch (ArgumentTypeException) { > result = null; > } > if (result == null || result == Builtin.NotImplemented) { > try { > result = BinaryOp(cntx, arrayother, this, NpyDefs.NpyArray_Ops.npy_op_equal); > } catch (NotImplementedException) { > result = Builtin.NotImplemented; > } > } > > if (result == Builtin.NotImplemented) { > if (type == NpyDefs.NPY_TYPES.NPY_VOID) { > if (Dtype != arrayother.Dtype) { > return false; > } > if (Dtype.HasNames) { > object res = null; > foreach (string name in Dtype.Names) { > ndarray a1 = NpyArray.EnsureAnyArray(this[name]); > ndarray a2 = NpyArray.EnsureAnyArray(arrayother[name]); > object eq = a1.__eq__(cntx, a2); > if (res == null) { > res = eq; > } else { > res = BinaryOp(cntx, res, eq, NpyDefs.NpyArray_Ops.npy_op_logical_and); > } > } > if (res == null) { > throw new ArgumentException("No fields found"); > } > return res; > } > result = NpyCoreApi.CompareStringArrays(this, arrayother, NpyDefs.NPY_COMPARE_OP.NPY_EQ); > } else { > result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_EQ); > } > } > return result; > } > > public object __req__(CodeContext cntx, object o) { > return __eq__(cntx, o); > } > > public object __ne__(CodeContext cntx, object o) { > if (o == null) { > return true; > } > NpyDefs.NPY_TYPES type = Dtype.TypeNum; > ndarray arrayother = o as ndarray; > if (arrayother == null) { > // Try to convert to an array. Return not equal on failure > try { > if (type == NpyDefs.NPY_TYPES.NPY_OBJECT) { > type = NpyDefs.NPY_TYPES.NPY_NOTYPE; > } > arrayother = NpyArray.FromAny(o, NpyCoreApi.DescrFromType(type), flags: NpyDefs.NPY_BEHAVED | NpyDefs.NPY_ENSUREARRAY); > if (arrayother == null) { > return true; > } > } catch { > return true; > } > } > > object result = BinaryOp(cntx, this, arrayother, NpyDefs.NpyArray_Ops.npy_op_not_equal); > if (result == Builtin.NotImplemented) { > if (type == NpyDefs.NPY_TYPES.NPY_VOID) { > if (Dtype != arrayother.Dtype) { > return false; > } > if (Dtype.HasNames) { > object res = null; > foreach (string name in Dtype.Names) { > ndarray a1 = NpyArray.EnsureAnyArray(this[name]); > ndarray a2 = NpyArray.EnsureAnyArray(arrayother[name]); > object eq = a1.__ne__(cntx, a2); > if (res == null) { > res = eq; > } else { > res = BinaryOp(cntx, res, eq, NpyDefs.NpyArray_Ops.npy_op_logical_or); > } > } > if (res == null) { > throw new ArgumentException("No fields found"); > } > return res; > } > result = NpyCoreApi.CompareStringArrays(this, arrayother, NpyDefs.NPY_COMPARE_OP.NPY_NE); > } else { > result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_NE); > } > } > return result; > } > > public object __rne__(CodeContext cntx, object o) { > return __ne__(cntx, o); > } > > public object __lt__(CodeContext cntx, object o) { > object result = BinaryOp(cntx, this, o, NpyDefs.NpyArray_Ops.npy_op_less); > if (result == Builtin.NotImplemented) { > result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_LT); > } > return result; > } > > public object __rlt__(CodeContext cntx, object o) { > return __ge__(cntx, o); > } > > public object __le__(CodeContext cntx, object o) { > object result = BinaryOp(cntx, this, o, NpyDefs.NpyArray_Ops.npy_op_less_equal); > if (result == Builtin.NotImplemented) { > result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_LE); > } > return result; > } > > public object __rle__(CodeContext cntx, object o) { > return __gt__(cntx, o); > } > > public object __gt__(CodeContext cntx, object o) { > object result = BinaryOp(cntx, this, o, NpyDefs.NpyArray_Ops.npy_op_greater); > if (result == Builtin.NotImplemented) { > result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_GT); > } > return result; > } > > public object __rgt__(CodeContext cntx, object o) { > return __le__(cntx, o); > } > > public object __ge__(CodeContext cntx, object o) { > object result = BinaryOp(cntx, this, o, NpyDefs.NpyArray_Ops.npy_op_greater_equal); > if (result == Builtin.NotImplemented) { > result = strings_compare(o, NpyDefs.NPY_COMPARE_OP.NPY_GE); > } > return result; > } > > public object __rge__(CodeContext cntx, object o) { > return __lt__(cntx, o); > } > > private object strings_compare(object o, NpyDefs.NPY_COMPARE_OP op) { > if (NpyDefs.IsString(Dtype.TypeNum)) { > ndarray self = this; > ndarray array_other = NpyArray.FromAny(o, flags: NpyDefs.NPY_BEHAVED | NpyDefs.NPY_ENSUREARRAY); > if (self.Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_UNICODE && > array_other.Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_STRING) { > dtype dt = new dtype(self.Dtype); > dt.ElementSize = array_other.Dtype.ElementSize*4; > array_other = NpyCoreApi.FromArray(array_other, dt, 0); > } else if (self.Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_STRING && > array_other.Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_UNICODE) { > dtype dt = new dtype(array_other.Dtype); > dt.ElementSize = self.Dtype.ElementSize * 4; > self = NpyCoreApi.FromArray(self, dt, 0); > } > return ArrayReturn(NpyCoreApi.CompareStringArrays(self, array_other, op)); > } > return Builtin.NotImplemented; > } > > public object __int__(CodeContext cntx) { > if (Size != 1) { > throw new ArgumentException("only length 1 arrays can be converted to scalars"); > } > return NpyUtil_Python.CallBuiltin(cntx, "int", GetItem(0)); > } > > public object __long__(CodeContext cntx) { > if (Size != 1) { > throw new ArgumentException("only length 1 arrays can be converted to scalars"); > } > return NpyUtil_Python.CallBuiltin(cntx, "long", GetItem(0)); > } > > public object __float__(CodeContext cntx) { > if (Size != 1) { > throw new ArgumentException("only length 1 arrays can be converted to scalars"); > } > return NpyUtil_Python.CallBuiltin(cntx, "float", GetItem(0)); > } > > public object __floordiv__(CodeContext cntx, object o) { > return BinaryOp(null, this, o, NpyDefs.NpyArray_Ops.npy_op_floor_divide); > } > > public object __truediv__(CodeContext cntx, object o) { > return BinaryOp(null, this, o, NpyDefs.NpyArray_Ops.npy_op_true_divide); > } > > public object __complex__(CodeContext cntx) { > if (Size != 1) { > throw new ArgumentException("only length 1 arrays can be converted to scalars"); > } > return NpyUtil_Python.CallBuiltin(cntx, "complex", GetItem(0)); > } > > public bool __nonzero__() { > return (bool)this.any(); > } > > public static explicit operator bool(ndarray arr) { > int val = NpyCoreApi.ArrayBool(arr); > if (val < 0) { > NpyCoreApi.CheckError(); > return false; > } else { > return val != 0; > } > } > > public static explicit operator int(ndarray arr) { > object val = arr.__int__(null); > if (val is int) { > return (int)val; > } else { > throw new OverflowException(); > } > } > > public static explicit operator BigInteger(ndarray arr) { > return (BigInteger)arr.__long__(null); > } > > public static explicit operator double(ndarray arr) { > return (double)arr.__float__(null); > } > > public static explicit operator Complex(ndarray arr) { > return (Complex)arr.__complex__(null); > } > > #endregion > > #region indexing > > public object this[int index] { > get { > return ArrayItem((long)index); > } > } > > public object this[long index] { > get { > return ArrayItem(index); > } > } > > public object this[IntPtr index] { > get { > return ArrayItem(index.ToInt64()); > } > } > > public object this[BigInteger index] { > get { > long lIndex = (long)index; > return ArrayItem(lIndex); > } > } > > public Object this[params object[] args] { > get { > if (args == null) { > args = new object[] { null }; > } else { > if (args.Length == 1 && args[0] is PythonTuple) { > args = ((IEnumerable)args[0]).ToArray(); > } > > if (args.Length == 1 && args[0] is string) { > string field = (string)args[0]; > return ArrayReturn(NpyCoreApi.GetField(this, field)); > } > } > using (NpyIndexes indexes = new NpyIndexes()) > { > NpyUtil_IndexProcessing.IndexConverter(args, indexes); > if (indexes.IsSingleItem(ndim)) > { > // Optimization for single item index. > long offset = 0; > Int64[] dims = Dims; > Int64[] s = Strides; > for (int i = 0; i < ndim; i++) > { > long d = dims[i]; > long val = indexes.GetIntPtr(i).ToInt64(); > if (val < 0) > { > val += d; > } > if (val < 0 || val >= d) > { > throw new IndexOutOfRangeException(); > } > offset += val * s[i]; > } > return Dtype.ToScalar(this, offset); > } else if (indexes.IsMultiField) { > // Special case for multiple fields, transfer control back to Python. > // See PyArray_Subscript in mapping.c of the CPython API for similar. > return NpyUtil_Python.CallFunction(NpyUtil_Python.DefaultContext, "numpy.core._internal", > "_index_fields", this, args); > } > > > // General subscript case. > NpyCoreApi.Incref(Array); > ndarray result = NpyCoreApi.DecrefToInterface( > NpyCoreApi.ArraySubscript(this, indexes)); > NpyCoreApi.Decref(Array); > > if (result.ndim == 0) { > // We only want to return a scalar if there are not elipses > bool noelipses = true; > int n = indexes.NumIndexes; > for (int i = 0; i < n; i++) { > NpyIndexes.NpyIndexTypes t = indexes.IndexType(i); > if (t == NpyIndexes.NpyIndexTypes.ELLIPSIS || > t == NpyIndexes.NpyIndexTypes.STRING || > t == NpyIndexes.NpyIndexTypes.BOOL) { > noelipses = false; > break; > } > } > if (noelipses) { > return result.Dtype.ToScalar(this); > } > } > return result; > } > } > set { > if (!ChkFlags(NpyDefs.NPY_WRITEABLE)) { > throw new RuntimeException("array is not writeable."); > } > > if (args == null) { > args = new object[] { null }; > } else { > if (args.Length == 1 && args[0] is PythonTuple) { > PythonTuple pt = (PythonTuple)args[0]; > args = pt.ToArray(); > } > > if (args.Length == 1 && args[0] is string) { > string field = (string)args[0]; > if (!ChkFlags(NpyDefs.NPY_WRITEABLE)) { > throw new RuntimeException("array is not writeable."); > } > IntPtr descr; > int offset = NpyCoreApi.GetFieldOffset(Dtype, field, out descr); > if (offset < 0) { > throw new ArgumentException(String.Format("field name '{0}' not found.", field)); > } > NpyArray.SetField(this, descr, offset, value); > return; > } > } > > > using (NpyIndexes indexes = new NpyIndexes()) > { > NpyUtil_IndexProcessing.IndexConverter(args, indexes); > > // Special case for boolean on 0-d arrays. > if (ndim == 0 && indexes.NumIndexes == 1 && indexes.IndexType(0) == NpyIndexes.NpyIndexTypes.BOOL) > { > if (indexes.GetBool(0)) > { > SetItem(value, 0); > } > return; > } > > // Special case for single assignment. > long single_offset = indexes.SingleAssignOffset(this); > if (single_offset >= 0) > { > // This is a single item assignment. Use SetItem. > SetItem(value, single_offset); > return; > } > > if (indexes.IsSimple) > { > ndarray view = null; > try { > if (GetType() == typeof(ndarray)) { > view = NpyCoreApi.IndexSimple(this, indexes); > } else { > // Call through python to let the subtype returns the correct view > // TODO: Do we really need this? Why only for set with simple indexing? > CodeContext cntx = PythonOps.GetPythonTypeContext(DynamicHelpers.GetPythonType(this)); > object item = PythonOps.GetIndex(cntx, this, new PythonTuple(args)); > view = (item as ndarray); > if (view == null) { > throw new RuntimeException("Getitem not returning array"); > } > } > > NpyArray.CopyObject(view, value); > } finally { > if (view != null) { > view.Dispose(); > } > } > } > else > { > ndarray array_value = NpyArray.FromAny(value, Dtype, 0, 0, NpyDefs.NPY_FORCECAST, null); > try { > NpyCoreApi.Incref(array_value.Array); > if (NpyCoreApi.IndexFancyAssign(this, indexes, array_value) < 0) { > NpyCoreApi.CheckError(); > } > } finally { > NpyCoreApi.Decref(array_value.Array); > } > } > } > } > } > > #endregion > > #region properties > > /// > /// Number of dimensions in the array > /// > public int ndim { > get { return Marshal.ReadInt32(core, NpyCoreApi.ArrayOffsets.off_nd); } > } > > /// > /// Returns the size of each dimension as a tuple. > /// > public object shape { > get { return NpyUtil_Python.ToPythonTuple(this.Dims); } > set { > IntPtr[] shape = NpyUtil_ArgProcessing.IntpArrConverter(value); > NpyCoreApi.SetShape(this, shape); > } > } > > > /// > /// Total number of elements in the array. > /// > public object size { > get { return NpyCoreApi.ArraySize(this).ToPython(); } > } > > public PythonBuffer data { > get { > throw new NotImplementedException(); > } > } > > /// > /// Returns the reference count of the core array object. Used for debugging only. > /// > public int __coreRefCount__ { get { return Marshal.ReadInt32(Array, NpyCoreApi.Offset_RefCount); } } > > > /// > /// The type descriptor object for this array > /// > public dtype Dtype { > get { > if (core == IntPtr.Zero) return null; > IntPtr descr = Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_descr); > return NpyCoreApi.ToInterface(descr); > } > set { > NpyCoreApi.ArraySetDescr(this, value); > } > } > > > /// > /// The type descriptor object for this array > /// > public object dtype { > get { > return this.Dtype; > } > set { > dtype descr = value as dtype; > if (descr == null) { > descr = NpyDescr.DescrConverter(NpyUtil_Python.DefaultContext, value); > } > NpyCoreApi.ArraySetDescr(this, descr); > } > } > > /// > /// Flags for this array > /// > public flagsobj flags { > get { > return new flagsobj(this); > } > } > > /// > /// Returns an array of the stride of each dimension. > /// > public Int64[] Strides { > get { return NpyCoreApi.GetArrayDimsOrStrides(this, false); } > } > > public PythonTuple strides { > get { return NpyUtil_Python.ToPythonTuple(Strides); } > } > > public object real { > get { > return NpyCoreApi.GetReal(this); > } > set { > ndarray val = NpyArray.FromAny(value, null, 0, 0, 0, null); > NpyCoreApi.MoveInto(NpyCoreApi.GetReal(this), val); > } > } > > public object imag { > get { > if (IsComplex) { > return NpyCoreApi.GetImag(this); > } else { > // TODO: np.zeros_like when we have it. > ndarray result = Copy(); > result.flat = 0; > return result; > } > } > set { > if (IsComplex) { > ndarray val = NpyArray.FromAny(value, null, 0, 0, 0, null); > NpyCoreApi.MoveInto(NpyCoreApi.GetImag(this), val); > } else { > throw new ArgumentTypeException("array does not have an imaginary part to set."); > } > } > } > > public object flat { > get { > return NpyCoreApi.IterNew(this); > } > set { > // Assing like a.flat[:] = value > flatiter it = NpyCoreApi.IterNew(this); > it[new Slice(null)] = value; > } > } > > public object @base { > get { > // TODO: Handle non-array bases > return BaseArray; > } > } > > public int itemsize { > get { > return Dtype.ElementSize; > } > } > > public object nbytes { > get { > return NpyUtil_Python.ToPython(itemsize*Size); > } > } > > public ndarray T { > get { > return Transpose(); > } > } > > public object ctypes { > get { > return NpyUtil_Python.CallFunction(null, "numpy.core._internal", > "_ctypes", this, UnsafeAddress.ToPython()); > } > } > > #endregion > > #region methods > > public int dump(CodeContext cntx, object file) { > if (file is string) { > file = NpyUtil_Python.CallBuiltin(cntx, "open", file, "wb"); > } > NpyUtil_Python.CallFunction(cntx, "cPickle", "dump", this, file, 2); > return 0; > } > > public object dumps(CodeContext cntx) { > return NpyUtil_Python.CallFunction(cntx, "cPickle", "dumps", this, 2); > } > > public object all(object axis = null, ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > return ArrayReturn(All(iAxis, @out)); > } > > public object any(object axis = null, ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > return ArrayReturn(Any(iAxis, @out)); > } > > public object argmax(object axis = null, ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > return ArrayReturn(ArgMax(iAxis, @out)); > } > > public object argmin(object axis = null, ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > return ArrayReturn(ArgMin(iAxis, @out)); > } > > public object argsort(object axis = null, string kind = null, object order = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis, -1); > NpyDefs.NPY_SORTKIND sortkind = NpyUtil_ArgProcessing.SortkindConverter(kind); > > if (order != null) { > throw new NotImplementedException("Sort field order not yet implemented."); > } > > return ArrayReturn(ArgSort(iAxis, sortkind)); > } > > public object astype(CodeContext cntx, object dtype = null) { > dtype d = NpyDescr.DescrConverter(cntx, dtype); > if (d == this.Dtype) { > return this; > } > if (this.Dtype.HasNames) { > // CastToType doesn't work properly for > // record arrays, so we use FromArray. > int flags = NpyDefs.NPY_FORCECAST; > if (IsFortran) { > flags |= NpyDefs.NPY_FORTRAN; > } > return NpyCoreApi.FromArray(this, d, flags); > } > return NpyCoreApi.CastToType(this, d, this.IsFortran); > } > > public ndarray byteswap(bool inplace = false) { > return NpyCoreApi.Byteswap(this, inplace); > } > > private static string[] chooseArgNames = { "out", "mode" }; > > public object choose([ParamDictionary] IDictionary kwargs, > params object[] args){ > IEnumerable choices; > if (args == null) { > choices = new object[0]; > } > else if (args.Length == 1 && args[0] is IEnumerable) { > choices = (IEnumerable)args[0]; > } else { > choices = args; > } > object[] kargs = NpyUtil_ArgProcessing.BuildArgsArray(new object[0], chooseArgNames, kwargs); > ndarray aout = kargs[0] as ndarray; > NpyDefs.NPY_CLIPMODE clipMode = NpyUtil_ArgProcessing.ClipmodeConverter(kargs[1]); > return ArrayReturn(Choose(choices, aout, clipMode)); > } > > public object clip(object min = null, object max = null, ndarray @out = null) { > return Clip(min, max, @out); > } > > public ndarray compress(object condition, object axis = null, ndarray @out = null) { > ndarray aCondition = NpyArray.FromAny(condition, null, 0, 0, 0, null); > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > > if (aCondition.ndim != 1) { > throw new ArgumentException("condition must be 1-d array"); > } > > ndarray indexes = aCondition.NonZero()[0]; > return TakeFrom(indexes, iAxis, @out, NpyDefs.NPY_CLIPMODE.NPY_RAISE); > } > > public ndarray conj(ndarray @out = null) { > return conjugate(@out); > } > > public ndarray conjugate(ndarray @out = null) { > return Conjugate(@out); > } > > public object copy(object order = null) { > return ArrayReturn(Copy(order)); > } > > public ndarray Copy(object order = null) { > NpyDefs.NPY_ORDER eOrder = NpyUtil_ArgProcessing.OrderConverter(order); > return NpyCoreApi.NewCopy(this, eOrder); > } > > public object cumprod(CodeContext cntx, object axis = null, object dtype = null, > ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > dtype rtype = null; > if (dtype != null) { > rtype = NpyDescr.DescrConverter(cntx, dtype); > } > return CumProd(iAxis, rtype, @out); > } > > public object cumsum(CodeContext cntx, object axis = null, object dtype = null, > ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > dtype rtype = null; > if (dtype != null) { > rtype = NpyDescr.DescrConverter(cntx, dtype); > } > return CumSum(iAxis, rtype, @out); > } > > > public ndarray diagonal(int offset = 0, int axis1 = 0, int axis2 = 1) { > return Diagonal(offset, axis1, axis2); > } > > public object dot(object other) { > return ModuleMethods.dot(this, other); > } > > public void fill(object scalar) { > FillWithScalar(scalar); > } > > public ndarray flatten(object order = null) { > NpyDefs.NPY_ORDER eOrder = > NpyUtil_ArgProcessing.OrderConverter(order); > return Flatten(eOrder); > } > > public ndarray getfield(CodeContext cntx, object dtype, int offset = 0) { > NumpyDotNet.dtype dt = NpyDescr.DescrConverter(cntx, dtype); > return NpyCoreApi.GetField(this, dt, offset); > } > > public object item(params object[] args) { > if (args != null && args.Length == 1 && args[0] is PythonTuple) { > PythonTuple t = (PythonTuple)args[0]; > args = t.ToArray(); > } > if (args == null || args.Length == 0) { > if (ndim == 0 || Size == 1) { > return GetItem(0); > } else { > throw new ArgumentException("can only convert an array of size 1 to a Python scalar"); > } > } else { > using (NpyIndexes indexes = new NpyIndexes()) { > NpyUtil_IndexProcessing.IndexConverter(args, indexes); > if (args.Length == 1) { > if (indexes.IndexType(0) != NpyIndexes.NpyIndexTypes.INTP) { > throw new ArgumentException("invalid integer"); > } > // Do flat indexing > return Flat.Get(indexes.GetIntPtr(0)); > } else { > if (indexes.IsSingleItem(ndim)) { > long offset = indexes.SingleAssignOffset(this); > return GetItem(offset); > } else { > throw new ArgumentException("Incorrect number of indices for the array"); > } > } > } > } > } > > public void itemset(params object[] args) { > // Convert args to value and args > if (args == null || args.Length == 0) { > throw new ArgumentException("itemset must have at least one argument"); > } > object value = args.Last(); > args = args.Take(args.Length - 1).ToArray(); > > if (args.Length == 1 && args[0] is PythonTuple) { > PythonTuple t = (PythonTuple)args[0]; > args = t.ToArray(); > } > if (args.Length == 0) { > if (ndim == 0 || Size == 1) { > SetItem(value, 0); > } else { > throw new ArgumentException("can only convert an array of size 1 to a Python scalar"); > } > } else { > using (NpyIndexes indexes = new NpyIndexes()) { > NpyUtil_IndexProcessing.IndexConverter(args, indexes); > if (args.Length == 1) { > if (indexes.IndexType(0) != NpyIndexes.NpyIndexTypes.INTP) { > throw new ArgumentException("invalid integer"); > } > // Do flat indexing > Flat.SingleAssign(indexes.GetIntPtr(0), value); > } else { > if (indexes.IsSingleItem(ndim)) { > long offset = indexes.SingleAssignOffset(this); > SetItem(value, offset); > } else { > throw new ArgumentException("Incorrect number of indices for the array"); > } > } > } > } > } > > public object max(object axis = null, ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > return ArrayReturn(Max(iAxis, @out)); > } > > public object mean(CodeContext cntx, object axis = null, object dtype = null, > ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > dtype rtype = null; > if (dtype != null) { > rtype = NpyDescr.DescrConverter(cntx, dtype); > } > return Mean(iAxis, GetTypeDouble(this.Dtype, rtype), @out); > } > > public object min(object axis = null, ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > return ArrayReturn(Min(iAxis, @out)); > } > > public ndarray newbyteorder(string endian = null) { > dtype newtype = NpyCoreApi.DescrNewByteorder(Dtype, NpyUtil_ArgProcessing.ByteorderConverter(endian)); > return NpyCoreApi.View(this, newtype, null); > } > > public PythonTuple nonzero() { > return new PythonTuple(NonZero()); > } > > public object prod(CodeContext cntx, object axis = null, object dtype = null, ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > dtype rtype = null; > if (dtype != null) { > rtype = NpyDescr.DescrConverter(cntx, dtype); > } > return ArrayReturn(Prod(iAxis, rtype, @out)); > } > > public object ptp(object axis = null, ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > return Ptp(iAxis, @out); > } > > public void put(object indices, object values, object mode = null) { > ndarray aIndices; > ndarray aValues; > NpyDefs.NPY_CLIPMODE eMode; > > aIndices = (indices as ndarray); > if (aIndices == null) { > aIndices = NpyArray.FromAny(indices, NpyCoreApi.DescrFromType(NpyDefs.NPY_INTP), > 0, 0, NpyDefs.NPY_CARRAY, null); > } > aValues = (values as ndarray); > if (aValues == null) { > aValues = NpyArray.FromAny(values, Dtype, 0, 0, NpyDefs.NPY_CARRAY, null); > } > eMode = NpyUtil_ArgProcessing.ClipmodeConverter(mode); > PutTo(aValues, aIndices, eMode); > } > > public ndarray ravel(object order = null) { > NpyDefs.NPY_ORDER eOrder = NpyUtil_ArgProcessing.OrderConverter(order); > return Ravel(eOrder); > } > > public object repeat(object repeats, object axis = null) { > ndarray aRepeats = (repeats as ndarray); > if (aRepeats == null) { > aRepeats = NpyArray.FromAny(repeats, NpyCoreApi.DescrFromType(NpyDefs.NPY_INTP), > 0, 0, NpyDefs.NPY_CARRAY, null); > } > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > return ArrayReturn(Repeat(aRepeats, iAxis)); > } > > private static string[] reshapeKeywords = { "order" }; > > public ndarray reshape([ParamDictionary] IDictionary kwds, params object[] args) { > object[] keywordArgs = NpyUtil_ArgProcessing.BuildArgsArray(new object[0], reshapeKeywords, kwds); > NpyDefs.NPY_ORDER order = NpyUtil_ArgProcessing.OrderConverter(keywordArgs[0]); > IntPtr[] newshape; > // TODO: Add NpyArray_View call for (None) case. (Why?) > if (args == null) { > newshape = new IntPtr[0]; > } else if (args.Length == 1 && (args[0] is IList || args[0] is ndarray)) { > newshape = NpyUtil_ArgProcessing.IntpListConverter((IEnumerable)args[0]); > } else { > newshape = NpyUtil_ArgProcessing.IntpListConverter(args); > } > return NpyCoreApi.Newshape(this, newshape, order); > } > > public ndarray Reshape(IEnumerable shape, NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_ANYORDER) { > return NpyCoreApi.Newshape(this, shape.Select(x => (IntPtr)x).ToArray(), order); > } > > private static string[] resizeKeywords = { "refcheck" }; > > public void resize([ParamDictionary] IDictionary kwds, params object[] args) { > object[] keywordArgs = NpyUtil_ArgProcessing.BuildArgsArray(new object[0], resizeKeywords, kwds); > bool refcheck = NpyUtil_ArgProcessing.BoolConverter(keywordArgs[0]); > IntPtr[] newshape; > > if (args == null || args.Length == 0 || args.Length == 1 && args[0] == null) { > return; > } > if (args.Length == 1 && args[0] is IList) { > newshape = NpyUtil_ArgProcessing.IntpListConverter((IList)args[0]); > } else { > newshape = NpyUtil_ArgProcessing.IntpListConverter(args); > } > Resize(newshape, refcheck, NpyDefs.NPY_ORDER.NPY_CORDER); > } > > public object round(int decimals = 0, ndarray @out = null) { > return Round(decimals, @out); > } > > public object searchsorted(object keys, string side = null) { > NpyDefs.NPY_SEARCHSIDE eSide = NpyUtil_ArgProcessing.SearchsideConverter(side); > ndarray aKeys = (keys as ndarray); > if (aKeys == null) { > aKeys = NpyArray.FromAny(keys, NpyArray.FindArrayType(keys, Dtype, NpyDefs.NPY_MAXDIMS), > 0, 0, NpyDefs.NPY_CARRAY, null); > } > return ArrayReturn(SearchSorted(aKeys, eSide)); > } > > public void setfield(CodeContext cntx, object value, object dtype, int offset = 0) { > dtype d = NpyDescr.DescrConverter(cntx, dtype); > NpyArray.SetField(this, d.Descr, offset, value); > } > > public void setflags(object write = null, object align = null, object uic = null) { > int flags = RawFlags; > if (align != null) { > bool bAlign = NpyUtil_ArgProcessing.BoolConverter(align); > if (bAlign) { > flags |= NpyDefs.NPY_ALIGNED; > } else { > if (!NpyCoreApi.IsAligned(this)) { > throw new ArgumentException("cannot set aligned flag of mis-aligned array to True"); > } > flags &= ~NpyDefs.NPY_ALIGNED; > } > } > if (uic != null) { > bool bUic = NpyUtil_ArgProcessing.BoolConverter(uic); > if (bUic) { > throw new ArgumentException("cannot set UPDATEIFCOPY flag to True"); > } else { > NpyCoreApi.ClearUPDATEIFCOPY(this); > } > } > if (write != null) { > bool bWrite = NpyUtil_ArgProcessing.BoolConverter(write); > if (bWrite) { > if (!NpyCoreApi.IsWriteable(this)) { > throw new ArgumentException("cannot set WRITEABLE flag to true on this array"); > } > flags |= NpyDefs.NPY_WRITEABLE; > } else { > flags &= ~NpyDefs.NPY_WRITEABLE; > } > } > RawFlags = flags; > } > > public void sort(int axis = -1, string kind = null, object order = null) { > NpyDefs.NPY_SORTKIND sortkind = NpyUtil_ArgProcessing.SortkindConverter(kind); > if (order != null) { > throw new NotImplementedException("Field sort order not yet implemented."); > } > Sort(axis, sortkind); > } > > public object squeeze() { > return Squeeze(); > } > > public object std(CodeContext cntx, object axis = null, object dtype = null, ndarray @out = null, int ddof = 0) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > dtype rtype = null; > if (dtype != null) { > rtype = NpyDescr.DescrConverter(cntx, dtype); > } > return Std(iAxis, GetTypeDouble(this.Dtype, rtype), @out, false, ddof); > } > > public object sum(CodeContext cntx, object axis = null, object dtype = null, ndarray @out = null) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > dtype rtype = null; > if (dtype != null) { > rtype = NpyDescr.DescrConverter(cntx, dtype); > } > return ArrayReturn(Sum(iAxis, rtype, @out)); > } > > > public ndarray swapaxes(int a1, int a2) { > return SwapAxes(a1, a2); > } > > public ndarray swapaxes(object a1, object a2) { > int iA1 = NpyUtil_ArgProcessing.IntConverter(a1); > int iA2 = NpyUtil_ArgProcessing.IntConverter(a2); > return SwapAxes(iA1, iA2); > } > > > public object take(object indices, > object axis = null, > ndarray @out = null, > object mode = null) { > ndarray aIndices; > int iAxis; > NpyDefs.NPY_CLIPMODE cMode; > > aIndices = (indices as ndarray); > if (aIndices == null) { > aIndices = NpyArray.FromAny(indices, NpyCoreApi.DescrFromType(NpyDefs.NPY_INTP), > 1, 0, NpyDefs.NPY_CONTIGUOUS, null); > } > iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > cMode = NpyUtil_ArgProcessing.ClipmodeConverter(mode); > return ArrayReturn(TakeFrom(aIndices, iAxis, @out, cMode)); > } > > public void tofile(CodeContext cntx, PythonFile file, string sep = null, string format = null) { > ToFile(cntx, file, sep, format); > } > > public void tofile(CodeContext cntx, string filename, string sep = null, string format = null) { > PythonFile f = (PythonFile)NpyUtil_Python.CallBuiltin(cntx, "open", filename, "wb"); > try { > tofile(cntx, f, sep, format); > } finally { > f.close(); > } > } > > public object tolist() { > if (ndim == 0) { > return GetItem(0); > } else { > List result = new List(); > long size = Dims[0]; > for (long i = 0; i < size; i++) { > result.append(NpyCoreApi.ArrayItem(this, i).tolist()); > } > return result; > } > } > > public Bytes tostring(object order = null) { > NpyDefs.NPY_ORDER eOrder = NpyUtil_ArgProcessing.OrderConverter(order); > return ToString(eOrder); > } > > public object trace(CodeContext cntx, int offset = 0, int axis1 = 0, int axis2 = 1, > object dtype = null, ndarray @out = null) { > ndarray diag = Diagonal(offset, axis1, axis2); > return diag.sum(cntx, dtype:dtype, @out:@out); > } > > public ndarray transpose(params object[] args) { > if (args == null || args.Length == 0 || args.Length == 1 && args[0] == null) { > return Transpose(); > } else if (args.Length == 1 && args[0] is IList) { > return Transpose(NpyUtil_ArgProcessing.IntpListConverter((IList)args[0])); > } else { > return Transpose(NpyUtil_ArgProcessing.IntpListConverter(args)); > } > } > > public object var(CodeContext cntx, object axis = null, object dtype = null, ndarray @out = null, int ddof = 0) { > int iAxis = NpyUtil_ArgProcessing.AxisConverter(axis); > dtype rtype = null; > if (dtype != null) { > rtype = NpyDescr.DescrConverter(cntx, dtype); > } > return Std(iAxis, GetTypeDouble(this.Dtype, rtype), @out, true, ddof); > } > > > public ndarray view(CodeContext cntx, object dtype = null, object type = null) { > if (dtype != null && type == null) { > if (IsNdarraySubtype(dtype)) { > type = dtype; > dtype = null; > } > } > > if (type != null && !IsNdarraySubtype(type)) { > throw new ArgumentException("Type must be a subtype of ndarray."); > } > dtype rtype = null; > if (dtype != null) { > rtype = NpyDescr.DescrConverter(cntx, dtype); > } > return NpyCoreApi.View(this, rtype, type); > } > > #endregion > > #endregion > > > public long Size { > get { return NpyCoreApi.ArraySize(this).ToInt64(); } > } > > public ndarray Real { > get { return NpyCoreApi.GetReal(this); } > } > > public ndarray Imag { > get { return NpyCoreApi.GetImag(this); } > } > > public override string ToString() { > return StrFunction(this); > } > > public flatiter Flat { > get { > return NpyCoreApi.IterNew(this); > } > } > > public ndarray NewCopy(NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_CORDER) { > return NpyCoreApi.NewCopy(this, order); > } > > > /// > /// Directly accesses the array memory and returns the object at that > /// offset. No checks are made, caller can easily crash the program > /// or retrieve garbage data. > /// > /// Offset into data array in bytes > /// Contents of the location > internal object GetItem(long offset) { > return Dtype.f.GetItem(offset, this); > } > > > /// > /// Directly sets a given location in the data array. No checks are > /// made to make sure the offset is sensible or the data is valid in > /// anyway -- caller beware. > /// 'internal' because this is a security vulnerability. > /// > /// Value to write > /// Offset into array in bytes > internal void SetItem(object src, long offset) { > Dtype.f.SetItem(src, offset, this); > } > > > /// > /// Handle to the core representation. > /// > public IntPtr Array { > get { return core; } > } > > > /// > /// Base address of the array data memory. Use with caution. > /// > internal IntPtr DataAddress { > get { return Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_data); } > } > > /// > /// Returns an array of the sizes of each dimension. This property allocates > /// a new array with each call and must make a managed-to-native call so it's > /// worth caching the results if used in a loop. > /// > public Int64[] Dims { > get { return NpyCoreApi.GetArrayDimsOrStrides(this, true); } > } > > > /// > /// Returns the stride of a given dimension. For looping over all dimensions, > /// use 'strides'. This is more efficient if only one dimension is of interest. > /// > /// Dimension to query > /// Data stride in bytes > public long Stride(int dimension) { > return NpyCoreApi.GetArrayStride(this, dimension); > } > > > /// > /// True if memory layout of array is contiguous > /// > public bool IsContiguous { > get { return ChkFlags(NpyDefs.NPY_CONTIGUOUS); } > } > > public bool IsOneSegment { > get { return ndim == 0 || ChkFlags(NpyDefs.NPY_FORTRAN) || ChkFlags(NpyDefs.NPY_CARRAY); } > } > > /// > /// True if memory layout is Fortran order, false implies C order > /// > public bool IsFortran { > get { return ChkFlags(NpyDefs.NPY_FORTRAN) && ndim > 1; } > } > > public bool IsNotSwapped { > get { return Dtype.IsNativeByteOrder; } > } > > public bool IsByteSwapped { > get { return !IsNotSwapped; } > } > > public bool IsCArray { > get { return ChkFlags(NpyDefs.NPY_CARRAY) && IsNotSwapped; } > } > > public bool IsCArray_RO { > get { return ChkFlags(NpyDefs.NPY_CARRAY_RO) && IsNotSwapped; } > } > > public bool IsFArray { > get { return ChkFlags(NpyDefs.NPY_FARRAY) && IsNotSwapped; } > } > > public bool IsFArray_RO { > get { return ChkFlags(NpyDefs.NPY_FARRAY_RO) && IsNotSwapped; } > } > > public bool IsBehaved { > get { return ChkFlags(NpyDefs.NPY_BEHAVED) && IsNotSwapped; } > } > > public bool IsBehaved_RO { > get { return ChkFlags(NpyDefs.NPY_ALIGNED) && IsNotSwapped; } > } > > internal bool IsComplex { > get { return NpyDefs.IsComplex(Dtype.TypeNum); } > } > > internal bool IsInteger { > get { return NpyDefs.IsInteger(Dtype.TypeNum); } > } > > public bool IsFlexible { > get { return NpyDefs.IsFlexible(Dtype.TypeNum); } > } > > public bool IsWriteable { > get { return ChkFlags(NpyDefs.NPY_WRITEABLE); } > } > > public bool IsString { > get { return Dtype.TypeNum == NpyDefs.NPY_TYPES.NPY_STRING; } > } > > > /// > /// TODO: What does this return? > /// > public int ElementStrides { > get { return NpyCoreApi.ElementStrides(this); } > } > > public bool StridingOk(NpyDefs.NPY_ORDER order) { > return order == NpyDefs.NPY_ORDER.NPY_ANYORDER || > order == NpyDefs.NPY_ORDER.NPY_CORDER && IsContiguous || > order == NpyDefs.NPY_ORDER.NPY_FORTRANORDER && IsFortran; > } > > private bool ChkFlags(int flag) { > return ((RawFlags & flag) == flag); > } > > // These operators are useful from other C# code and also turn into the > // appropriate Python functions (+ goes to __add__, etc). > > #region IEnumerable interface > > public IEnumerator GetEnumerator() { > return new ndarray_Enumerator(this); > } > > System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { > return new ndarray_Enumerator(this); > } > > #endregion > > #region Internal methods > > internal long Length { > get { > return Dims[0]; > } > } > > public static object ArrayReturn(ndarray a) { > if (a.ndim == 0) { > return a.Dtype.ToScalar(a); > } else { > return a; > } > } > private string BuildStringRepr(bool repr) { > // Equivalent to array_repr_builtin (arrayobject.c) > StringBuilder sb = new StringBuilder(); > if (repr) sb.Append("array("); > DumpData(sb, this.Dims, this.Strides, 0, 0); > > if (repr) { > if (NpyDefs.IsExtended(this.Dtype.TypeNum)) { > sb.AppendFormat(", '{0}{1}')", (char)Dtype.Type, this.Dtype.ElementSize); > } else { > sb.AppendFormat(", '{0}')", (char)Dtype.Type); > } > } > return sb.ToString(); > } > > /// > /// Recursively walks the array and appends a representation of each element > /// to the passed string builder. Square brackets delimit each array dimension. > /// > /// StringBuilder instance to append to > /// Array of size of each dimension > /// Offset in bytes to reach next element in each dimension > /// Index of the current dimension (starts at 0, recursively counts up) > /// Byte offset into data array, starts at 0 > private void DumpData(StringBuilder sb, long[] dimensions, long[] strides, > int dimIdx, long offset) { > > if (dimIdx == ndim) { > Object value = Dtype.f.GetItem(offset, this); > if (value == null) { > sb.Append("None"); > } else { > sb.Append((string)PythonOps.Repr(NpyUtil_Python.DefaultContext, value)); > } > } else { > sb.Append('['); > for (int i = 0; i < dimensions[dimIdx]; i++) { > DumpData(sb, dimensions, strides, dimIdx + 1, > offset + strides[dimIdx] * i); > if (i < dimensions[dimIdx] - 1) { > sb.Append(", "); > } > } > sb.Append(']'); > } > } > > #region Direct Typed Accessors > // BEWARE! These are direct memory accessors and ignore the type of the array. > // Yes, you can do clever things and yes, you can hang yourself, too. > > public unsafe int ReadAsInt32(long index) { > return *(int*)((long)UnsafeAddress + OffsetToItem(index)); > } > > public unsafe void WriteAsInt32(long index, int v) { > *(int*)((long)UnsafeAddress + OffsetToItem(index)) = v; > } > > public unsafe IntPtr ReadAsIntPtr(long index) { > return *(IntPtr*)((long)UnsafeAddress + OffsetToItem(index)); > } > > public unsafe void WriteAsIntPtr(long index, IntPtr v) { > *(IntPtr*)((long)UnsafeAddress + OffsetToItem(index)) = v; > } > > public unsafe long ReadAsInt64(long index) { > return *(long*)((long)UnsafeAddress + OffsetToItem(index)); > } > > public unsafe void WriteAsInt64(long index, long v) { > *(long*)((long)UnsafeAddress + OffsetToItem(index)) = v; > } > > public unsafe float ReadAsFloat(long index) { > return *(float*)((long)UnsafeAddress + OffsetToItem(index)); > } > > public unsafe void WriteAsFloat(long index, float v) { > *(float*)((long)UnsafeAddress + OffsetToItem(index)) = v; > } > > public unsafe double ReadAsDouble(long index) { > return *(double*)((long)UnsafeAddress + OffsetToItem(index)); > } > > public unsafe void WriteAsDouble(long index, double v) { > *(double*)((long)UnsafeAddress + OffsetToItem(index)) = v; > } > > private long OffsetToItem(long index) { > if (ndim > 1) { > throw new IndexOutOfRangeException("Only 1-d arrays are currently supported. Please use ArrayItem()."); > } > > long dim0 = Dims[0]; > if (index < 0) { > index += dim0; > } > if (index < 0 || index >= dim0) { > throw new IndexOutOfRangeException("Index out of range"); > } > return index * Strides[0]; > } > #endregion > > /// > /// Indexes an array by a single long and returns either an item or a sub-array. > /// > /// The index into the array > object ArrayItem(long index) { > if (ndim == 1) { > return Dtype.ToScalar(this, OffsetToItem(index)); > } else { > return NpyCoreApi.ArrayItem(this, index); > } > } > > internal Int32 RawFlags { > get { > return Marshal.ReadInt32(Array + NpyCoreApi.ArrayOffsets.off_flags); > } > set { > Marshal.WriteInt32(Array + NpyCoreApi.ArrayOffsets.off_flags, value); > } > } > > internal static dtype GetTypeDouble(dtype dtype1, dtype dtype2) { > if (dtype2 != null) { > return dtype2; > } > if (dtype1.TypeNum < NpyDefs.NPY_TYPES.NPY_FLOAT) { > return NpyCoreApi.DescrFromType(NpyDefs.NPY_TYPES.NPY_DOUBLE); > } else { > return dtype1; > } > } > > private static bool IsNdarraySubtype(object type) { > if (type == null) { > return false; > } > PythonType pt = type as PythonType; > if (pt == null) { > return false; > } > return PythonOps.IsSubClass(pt, DynamicHelpers.GetPythonTypeFromType(typeof(ndarray))); > } > > /// > /// Pointer to the internal memory. Should be used with great caution - memory > /// is native memory, not managed memory. > /// > public IntPtr UnsafeAddress { > get { return Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_data); } > } > > internal ndarray BaseArray { > get { > IntPtr p = Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_base_array); > if (p == IntPtr.Zero) { > return null; > } else { > return NpyCoreApi.ToInterface(p); > } > } > set { > lock (this) { > IntPtr p = Marshal.ReadIntPtr(core, NpyCoreApi.ArrayOffsets.off_base_array); > if (p != IntPtr.Zero) { > NpyCoreApi.Decref(p); > } > NpyCoreApi.Incref(value.core); > Marshal.WriteIntPtr(core, NpyCoreApi.ArrayOffsets.off_base_array, value.core); > } > } > } > > /// > /// Copies data into the array from 'data'. Offset is the offset into this > /// array's data space in bytes. The number of bytes copied is based on the > /// element size of the array's dtype. > /// > /// Offset into this array's data (bytes) > /// Memory address to copy the data from > /// If true data is byte-swapped during copy > internal unsafe void CopySwapIn(long offset, void* data, bool swap) { > NpyCoreApi.CopySwapIn(this, offset, data, swap); > } > > /// > /// Copies data out of the array into 'data'. Offset is the offset into this > /// array's data space in bytes. Number of bytes copied is based on the > /// element size of the array's dtype. > /// > /// Offset into array's data in bytes > /// Memory address to copy the data to > /// If true, results are byte-swapped from the array's image > internal unsafe void CopySwapOut(long offset, void* data, bool swap) { > NpyCoreApi.CopySwapOut(this, offset, data, swap); > } > > #endregion > > #region Memory pressure handling > > // The GC only knows about the managed memory that has been allocated, > // not the large pool of native array data. This means that the GC > // may not run even if we are about to run out of memory. Adding > // memory pressure tells the GC how much native memory is associated > // with managed objects. > > /// > /// Track the total pressure allocated by numpy. This is just for > /// error checking and to make sure it goes back to 0 in the end. > /// > private static long TotalMemPressure = 0; > > > /// > /// Memory pressure reserved for this instance in bytes to be released on dispose. > /// > private long reservedMemPressure = 0; > > internal static void IncreaseMemoryPressure(ndarray arr) { > if (arr.flags.owndata) { > int newBytes = (int)(arr.Size * arr.Dtype.ElementSize); > if (newBytes == 0) { > return; > } > > // Stupid annoying hack. What happens is the finalizer queue > // is processed by a low-priority background thread and can fall > // behind, allowing memory to be filled if the primary thread is > // creating garbage faster than the finalizer thread is cleaning > // it up. This is a heuristic to cause the main thread to pause > // when needed. All of this is necessary because the ndarray > // object defines a finalizer, which most .NET objects don't have > // and .NET doesn't appear well optimized for cases with huge > // numbers of finalizable objects. > // TODO: What do we do for a collection heuristic for 64-bit? Don't > // want to collect too often but don't want to page either. > if (IntPtr.Size == 4 && > (TotalMemPressure > 1500000000 || TotalMemPressure + newBytes > 1700000000)) { > System.GC.Collect(); > System.GC.WaitForPendingFinalizers(); > } > > System.Threading.Interlocked.Add(ref TotalMemPressure, newBytes); > System.GC.AddMemoryPressure(newBytes); > arr.reservedMemPressure = newBytes; > //Console.WriteLine("Added {0} bytes of pressure, now {1}", > // newBytes, TotalMemPressure); > } > } > > internal static void DecreaseMemoryPressure(long numBytes) { > System.Threading.Interlocked.Add(ref TotalMemPressure, -numBytes); > if (numBytes > 0) { > System.GC.RemoveMemoryPressure(numBytes); > } > //Console.WriteLine("Removed {0} bytes of pressure, now {1}", > // newBytes, TotalMemPressure); > } > > #endregion > > #region Buffer protocol > > public IExtBufferProtocol GetBuffer(NpyBuffer.PyBuf flags) { > return new ndarrayBufferAdapter(this, flags); > } > > public IExtBufferProtocol GetPyBuffer(int flags) { > return GetBuffer((NpyBuffer.PyBuf)flags); > } > > /// > /// Adapts an instance that implements IBufferProtocol and IPythonBufferable > /// to the IExtBufferProtocol. > /// > private class ndarrayBufferAdapter : IExtBufferProtocol > { > internal ndarrayBufferAdapter(ndarray a, NpyBuffer.PyBuf flags) { > arr = a; > > if ((flags & NpyBuffer.PyBuf.C_CONTIGUOUS) == NpyBuffer.PyBuf.C_CONTIGUOUS && > !arr.ChkFlags(NpyDefs.NPY_C_CONTIGUOUS)) { > throw new ArgumentException("ndarray is not C-continuous"); > } > if ((flags & NpyBuffer.PyBuf.F_CONTIGUOUS) == NpyBuffer.PyBuf.F_CONTIGUOUS && > !arr.ChkFlags(NpyDefs.NPY_F_CONTIGUOUS)) { > throw new ArgumentException("ndarray is not F-continuous"); > } > if ((flags & NpyBuffer.PyBuf.ANY_CONTIGUOUS) == NpyBuffer.PyBuf.ANY_CONTIGUOUS && > !arr.IsOneSegment) { > throw new ArgumentException("ndarray is not contiguous"); > } > if ((flags & NpyBuffer.PyBuf.STRIDES) != NpyBuffer.PyBuf.STRIDES && > (flags & NpyBuffer.PyBuf.ND) == NpyBuffer.PyBuf.ND && > !arr.ChkFlags(NpyDefs.NPY_C_CONTIGUOUS)) { > throw new ArgumentException("ndarray is not c-contiguous"); > } > if ((flags & NpyBuffer.PyBuf.WRITABLE) == NpyBuffer.PyBuf.WRITABLE && > !arr.IsWriteable) { > throw new ArgumentException("ndarray is not writable"); > } > > readOnly = ((flags & NpyBuffer.PyBuf.WRITABLE) == 0); > ndim = ((flags & NpyBuffer.PyBuf.ND) == 0) ? 0 : arr.ndim; > shape = ((flags & NpyBuffer.PyBuf.ND) == 0) ? null : arr.Dims; > strides = ((flags & NpyBuffer.PyBuf.STRIDES) == 0) ? null : arr.Strides; > > if ((flags & NpyBuffer.PyBuf.FORMAT) == 0) { > // Force an array of unsigned bytes. > itemCount = arr.Size * arr.Dtype.ElementSize; > itemSize = sizeof(byte); > format = null; > } else { > itemCount = arr.Length; > itemSize = arr.Dtype.ElementSize; > format = NpyCoreApi.GetBufferFormatString(arr); > } > } > > #region IExtBufferProtocol > > long IExtBufferProtocol.ItemCount { > get { return itemCount; } > } > > string IExtBufferProtocol.Format { > get { return format; } > } > > int IExtBufferProtocol.ItemSize { > get { return itemSize; } > } > > int IExtBufferProtocol.NumberDimensions { > get { return ndim; } > } > > bool IExtBufferProtocol.ReadOnly { > get { return readOnly; } > } > > IList IExtBufferProtocol.Shape { > get { return shape; } > } > > long[] IExtBufferProtocol.Strides { > get { return strides; } > } > > long[] IExtBufferProtocol.SubOffsets { > get { > long[] s = new long[ndim]; > for (int i = 0; i < s.Length; i++) s[i] = -1; > return s; > } > } > > IntPtr IExtBufferProtocol.UnsafeAddress { > get { return arr.DataAddress; } > } > > /// > /// Total number of bytes in the array > /// > long IExtBufferProtocol.Size { > get { return arr.Size; } > } > > #endregion > > private readonly ndarray arr; > private readonly bool readOnly; > private readonly long itemCount; > private readonly string format; > private readonly int ndim; > private readonly int itemSize; > private readonly IList shape; > private readonly long[] strides; > > } > > #endregion > } > > internal class ndarray_Enumerator : IEnumerator > { > public ndarray_Enumerator(ndarray a) { > arr = a; > index = -1; > } > > public object Current { > get { return arr[(int)index]; } > } > > public void Dispose() { > arr = null; > } > > > public bool MoveNext() { > index += 1; > return (index < arr.Dims[0]); > } > > public void Reset() { > index = -1; > } > > private ndarray arr; > private long index; > } > } Only in pure-numpy/src/numpy/NumpyDotNet/NpyAccessLib: bin diff -r pure-numpy/src/numpy/NumpyDotNet/NpyAccessLib/NpyAccessLib.vcxproj numpy-refactor/numpy/NumpyDotNet/NpyAccessLib/NpyAccessLib.vcxproj 1,248c1,239 < ??? < < < < Debug < Win32 < < < Debug < x64 < < < Release < Win32 < < < Release < x64 < < < < {1B601FD0-B1EA-41DC-A899-07093F298467} < NpyAccessLib < SAK < SAK < SAK < SAK < < < < DynamicLibrary < true < MultiByte < Intel C++ Compiler XE 14.0 < < < DynamicLibrary < true < MultiByte < Intel C++ Compiler XE 14.0 < < < DynamicLibrary < false < true < MultiByte < Intel C++ Compiler XE 14.0 < < < DynamicLibrary < false < true < MultiByte < Intel C++ Compiler XE 14.0 < < < < < < < < < < < < < < < < < < < .dll < $(SolutionDir)bin\ < < < true < < < .dll < $(ProjectDir)bin < < true < < < .dll < $(SolutionDir)bin\ < false < BuildCompile < < < .dll < $(SolutionDir)bin\ < false < BuildCompile < < < < Level3 < Disabled < ../../../libndarray/windows;../../../libndarray/src;%(AdditionalIncludeDirectories) < true < _WINDLL;%(PreprocessorDefinitions) < MultiThreadedDebugDLL < ProgramDatabase < < < true < $(OutDir)\ndarray.lib;%(AdditionalDependencies) < < < < < < < < < < < < < < < < < < < < < ./generate_code.bat < < < Generating umath_generated.h < < < < < Level3 < Disabled < ../../../libndarray/windows;../../../libndarray/src;%(AdditionalIncludeDirectories) < true < _WINDLL;_WIN64;%(PreprocessorDefinitions) < MultiThreadedDebugDLL < ProgramDatabase < < < true < ndarray.lib;%(AdditionalDependencies) < < < $(SolutionDir)\PythonNumPy\libndarray\windows\bin;%(AdditionalLibraryDirectories) < < < < < < < < < < < < < < < < < < < ./generate_code.bat < < < Generating umath_generated.h < < < < < Level3 < MaxSpeed < true < true < ../../../libndarray/windows;../../../libndarray/src;%(AdditionalIncludeDirectories) < true < < < true < true < true < $(OutDir)\ndarray.lib;%(AdditionalDependencies) < < < < < < < < < < < < < < < < < Level3 < MaxSpeed < true < true < ../../../libndarray/windows;../../../libndarray/src;%(AdditionalIncludeDirectories) < true < _WINDLL;_WIN64;%(PreprocessorDefinitions) < < < true < true < true < $(OutDir)\ndarray.lib;%(AdditionalDependencies) < < < < < < < < < < < < < < < < < < true < true < true < true < < < < < < < < < --- > ??? > > > > Debug > Win32 > > > Debug > x64 > > > Release > Win32 > > > Release > x64 > > > > {1B601FD0-B1EA-41DC-A899-07093F298467} > NpyAccessLib > > > > DynamicLibrary > true > MultiByte > > > DynamicLibrary > true > MultiByte > > > DynamicLibrary > false > true > MultiByte > > > DynamicLibrary > false > true > MultiByte > > > > > > > > > > > > > > > > > > > .dll > $(SolutionDir)bin\ > > > true > > > .dll > $(SolutionDir)bin\ > > true > > > .dll > $(SolutionDir)bin\ > false > BuildCompile > > > .dll > $(SolutionDir)bin\ > false > BuildCompile > > > > Level3 > Disabled > ../../../libndarray/windows;../../../libndarray/src;%(AdditionalIncludeDirectories) > true > _WINDLL;%(PreprocessorDefinitions) > MultiThreadedDebugDLL > ProgramDatabase > > > true > $(OutDir)\ndarray.lib;%(AdditionalDependencies) > > > > > > > > > > > > > > > > > > > > > ./generate_code.bat > > > Generating umath_generated.h > > > > > Level3 > Disabled > ../../../libndarray/windows;../../../libndarray/src;%(AdditionalIncludeDirectories) > true > _WINDLL;_WIN64;%(PreprocessorDefinitions) > MultiThreadedDebugDLL > ProgramDatabase > > > true > $(OutDir)\ndarray.lib;%(AdditionalDependencies) > > > > > > > > > > > > > > > > > > > > > ./generate_code.bat > > > Generating umath_generated.h > > > > > Level3 > MaxSpeed > true > true > ../../../libndarray/windows;../../../libndarray/src;%(AdditionalIncludeDirectories) > true > > > true > true > true > $(OutDir)\ndarray.lib;%(AdditionalDependencies) > > > > > > > > > > > > > > > > > Level3 > MaxSpeed > true > true > ../../../libndarray/windows;../../../libndarray/src;%(AdditionalIncludeDirectories) > true > _WINDLL;_WIN64;%(PreprocessorDefinitions) > > > true > true > true > $(OutDir)\ndarray.lib;%(AdditionalDependencies) > > > > > > > > > > > > > > > > > > true > true > true > true > > > > > > > > > Only in numpy-refactor/numpy/NumpyDotNet/NpyAccessLib: NpyAccessLib.vcxproj.user Only in pure-numpy/src/numpy/NumpyDotNet/NpyAccessLib: NpyAccessLib.vcxproj.vspscc Only in pure-numpy/src/numpy/NumpyDotNet/NpyAccessLib: __umath_generated.c Only in pure-numpy/src/numpy/NumpyDotNet/NpyAccessLib: x64 diff -r pure-numpy/src/numpy/NumpyDotNet/NpyArray.cs numpy-refactor/numpy/NumpyDotNet/NpyArray.cs 12c12 < namespace Cascade.VTFA.Python.Numpy { --- > namespace NumpyDotNet { 786,789c786,787 < public static ndarray Empty(long[] shape, dtype type = null, NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_CORDER) < { < if (type == null) < { --- > public static ndarray Empty(long[] shape, dtype type = null, NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_CORDER) { > if (type == null) { 795,796c793 < public static ndarray Zeros(long[] shape, dtype type = null, NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_CORDER) < { --- > public static ndarray Zeros(long[] shape, dtype type = null, NpyDefs.NPY_ORDER order = NpyDefs.NPY_ORDER.NPY_CORDER) { 799,800c796 < if (type.IsObject) < { --- > if (type.IsObject) { 803,805c799 < } < else < { --- > } else { 808d801 < diff -r pure-numpy/src/numpy/NumpyDotNet/NpyBuffer.cs numpy-refactor/numpy/NumpyDotNet/NpyBuffer.cs 10c10 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/NpyCoreApi.cs numpy-refactor/numpy/NumpyDotNet/NpyCoreApi.cs 1,2871c1,2868 < using System; < using System.Collections.Generic; < using System.Diagnostics; < using System.Linq; < using System.Security; < using System.Text; < using System.Runtime.InteropServices; < using System.Runtime.CompilerServices; < using System.Threading; < using IronPython.Runtime; < using IronPython.Runtime.Types; < using IronPython.Runtime.Operations; < < namespace Cascade.VTFA.Python.Numpy < { < /// < /// NpyCoreApi class wraps the interactions with the libndarray core library. It < /// also makes use of NpyAccessLib.dll for a few functions that must be < /// implemented in native code. < /// < /// TODO: This class is going to get very large. Not sure if it's better to < /// try to break it up or just use partial classes and split it across < /// multiple files. < /// < [SuppressUnmanagedCodeSecurity] < public static class NpyCoreApi < { < < /// < /// Stupid hack to allow us to pass an already-allocated wrapper instance < /// through the interfaceData argument and tell the wrapper creation functions < /// like ArrayNewWrapper to use an existing instance instead of creating a new < /// one. This is necessary because CPython does construction as an allocator < /// but .NET only triggers code after allocation. < /// < internal struct UseExistingWrapper < { < internal object Wrapper; < } < < #region API Wrappers < < /// < /// Returns a new descriptor object for internal types or user defined < /// types. < /// < internal static dtype DescrFromType(NpyDefs.NPY_TYPES type) { < // NOTE: No GIL wrapping here, function is re-entrant and includes locking. < IntPtr descr = NpyArray_DescrFromType((int)type); < CheckError(); < return DecrefToInterface(descr); < } < < internal static bool IsAligned(ndarray arr) { < lock (GlobalIterpLock) { < return Npy_IsAligned(arr.Array) != 0; < } < } < < internal static bool IsWriteable(ndarray arr) { < lock (GlobalIterpLock) { < return Npy_IsWriteable(arr.Array) != 0; < } < } < < internal static byte OppositeByteOrder { < get { return oppositeByteOrder; } < } < < internal static byte NativeByteOrder { < get { return (oppositeByteOrder == '<') ? (byte)'>' : (byte)'<'; } < } < < internal static dtype SmallType(dtype t1, dtype t2) { < lock (GlobalIterpLock) { < return ToInterface( < NpyArray_SmallType(t1.Descr, t2.Descr)); < } < } < < < /// < /// Moves the contents of src into dest. Arrays are assumed to have the < /// same number of elements, but can be different sizes and different types. < /// < /// Destination array < /// Source array < internal static void MoveInto(ndarray dest, ndarray src) { < lock (GlobalIterpLock) { < if (NpyArray_MoveInto(dest.Array, src.Array) == -1) { < CheckError(); < } < } < } < < < /// < /// Allocates a new array and returns the ndarray wrapper < /// < /// Type descriptor < /// Num of dimensions < /// Size of each dimension < /// True if Fortran layout, false for C layout < /// Newly allocated array < internal static ndarray AllocArray(dtype descr, int numdim, long[] dimensions, < bool fortran) { < IntPtr nativeDims = IntPtr.Zero; < < Incref(descr.Descr); < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArrayAccess_AllocArray(descr.Descr, numdim, dimensions, fortran)); < } < } < < /// < /// Constructs a new array from an input array and descriptor type. The < /// Underlying array may or may not be copied depending on the requirements. < /// < /// Source array < /// Desired type < /// New array flags < /// New array (may be source array) < internal static ndarray FromArray(ndarray src, dtype descr, int flags) { < if (descr == null && flags == 0) return src; < if (descr == null) descr = src.Dtype; < if (descr != null) NpyCoreApi.Incref(descr.Descr); < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_FromArray(src.Array, descr.Descr, flags)); < } < } < < < /// < /// Returns an array with the size or stride of each dimension in the given array. < /// < /// The array < /// True returns size of each dimension, false returns stride of each dimension < /// Array w/ an array size or stride for each dimension < internal static Int64[] GetArrayDimsOrStrides(ndarray arr, bool getDims) { < Int64[] retArr; < < IntPtr srcPtr = Marshal.ReadIntPtr(arr.Array, getDims ? ArrayOffsets.off_dimensions : ArrayOffsets.off_strides); < retArr = new Int64[arr.ndim]; < unsafe { < fixed (Int64* dimMem = retArr) { < lock (GlobalIterpLock) { < if (!GetIntpArray(srcPtr, arr.ndim, dimMem)) { < throw new IronPython.Runtime.Exceptions.RuntimeException("Error getting array dimensions."); < } < } < } < } < return retArr; < } < < internal static Int64[] GetArrayDims(broadcast iter, bool getDims) { < Int64[] retArr; < < // off_dimensions is to start of array, not pointer to array! < IntPtr srcPtr = iter.Iter + MultiIterOffsets.off_dimensions; < retArr = new Int64[iter.nd]; < unsafe { < fixed (Int64* dimMem = retArr) { < lock (GlobalIterpLock) { < if (!GetIntpArray(srcPtr, iter.nd, dimMem)) { < throw new IronPython.Runtime.Exceptions.RuntimeException("Error getting iterator dimensions."); < } < } < } < } < return retArr; < } < < internal static ndarray NewFromDescr(dtype descr, long[] dims, long[] strides, int flags, object interfaceData) < { < if (interfaceData == null) < { < Incref(descr.Descr); < lock (GlobalIterpLock) < { < IntPtr p = NewFromDescrThunk(descr.Descr, dims.Length, flags, dims, strides, IntPtr.Zero, < IntPtr.Zero); < < return DecrefToInterface(p); < } < } < else < { < GCHandle h = AllocGCHandle(interfaceData); < try { < Incref(descr.Descr); < Monitor.Enter(GlobalIterpLock); < return DecrefToInterface(NewFromDescrThunk(descr.Descr, dims.Length, < flags, dims, strides, IntPtr.Zero, GCHandle.ToIntPtr(h))); < } finally { < Monitor.Exit(GlobalIterpLock); < FreeGCHandle(h); < } < } < } < < internal static ndarray NewFromDescr(dtype descr, long[] dims, long[] strides, IntPtr data, < int flags, object interfaceData) { < if (interfaceData == null) { < Incref(descr.Descr); < lock (GlobalIterpLock) { < return DecrefToInterface( < NewFromDescrThunk(descr.Descr, dims.Length, flags, dims, strides, data, IntPtr.Zero)); < } < } else { < GCHandle h = AllocGCHandle(interfaceData); < try { < Incref(descr.Descr); < Monitor.Enter(GlobalIterpLock); < return DecrefToInterface(NewFromDescrThunk(descr.Descr, dims.Length, < flags, dims, strides, IntPtr.Zero, GCHandle.ToIntPtr(h))); < } finally { < Monitor.Exit(GlobalIterpLock); < FreeGCHandle(h); < } < } < } < < internal static flatiter IterNew(ndarray ao) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_IterNew(ao.Array)); < } < } < < internal static ndarray IterSubscript(flatiter iter, NpyIndexes indexes) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_IterSubscript(iter.Iter, indexes.Indexes, indexes.NumIndexes)); < } < } < < internal static void IterSubscriptAssign(flatiter iter, NpyIndexes indexes, ndarray val) { < lock (GlobalIterpLock) { < if (NpyArray_IterSubscriptAssign(iter.Iter, indexes.Indexes, indexes.NumIndexes, val.Array) < 0) { < CheckError(); < } < } < } < < internal static ndarray FlatView(ndarray a) < { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_FlatView(a.Array) < ); < } < } < < < /// < /// Creates a multiterator < /// < /// Sequence of objects to iterate over < /// Pointer to core multi-iterator structure < internal static IntPtr MultiIterFromObjects(IEnumerable objs) { < return MultiIterFromArrays(objs.Select(x => NpyArray.FromAny(x))); < } < < internal static IntPtr MultiIterFromArrays(IEnumerable arrays) { < IntPtr[] coreArrays = arrays.Select(x => { Incref(x.Array); return x.Array; }).ToArray(); < IntPtr result; < < lock (GlobalIterpLock) { < result = NpyArrayAccess_MultiIterFromArrays(coreArrays, coreArrays.Length); < } < CheckError(); < return result; < } < < internal static ufunc GetNumericOp(NpyDefs.NpyArray_Ops op) { < IntPtr ufuncPtr; < < lock (GlobalIterpLock) { < ufuncPtr = NpyArray_GetNumericOp((int)op); < } < return ToInterface(ufuncPtr); < } < < #if NOTDEF < internal static object GenericUnaryOp(ndarray a1, ufunc f, ndarray ret = null) { < // TODO: We need to do the error handling and wrapping of outputs. < Incref(a1.Array); < Incref(f.UFunc); < if (ret != null) { < Incref(ret.Array); < } < IntPtr result = NpyArray_GenericUnaryFunction(a1.Array, f.UFunc, < (ret == null ? IntPtr.Zero : ret.Array)); < ndarray rval = DecrefToInterface(result); < Decref(a1.Array); < Decref(f.UFunc); < if (ret == null) { < return ndarray.ArrayReturn(rval); < } else { < Decref(ret.Array); < return rval; < } < } < < internal static object GenericBinaryOp(ndarray a1, ndarray a2, ufunc f, ndarray ret = null) { < //ndarray arr = new ndarray[] { a1, a2, ret }; < //return GenericFunction(f, arr, null); < // TODO: We need to do the error handling and wrapping of outputs. < Incref(f.UFunc); < < IntPtr result = NpyArray_GenericBinaryFunction(a1.Array, a2.Array, f.UFunc, < (ret == null ? IntPtr.Zero : ret.Array)); < ndarray rval = DecrefToInterface(result); < Decref(f.UFunc); < < if (ret == null) { < return ndarray.ArrayReturn(rval); < } else { < return rval; < } < } < < #endif < < internal static object GenericReduction(ufunc f, ndarray arr, < ndarray indices, ndarray ret, int axis, dtype otype, ufunc.ReduceOp op) { < if (indices != null) { < Incref(indices.Array); < } < < ndarray rval; < lock (GlobalIterpLock) { < rval = DecrefToInterface( < NpyUFunc_GenericReduction(f.UFunc, arr.Array, < (indices != null) ? indices.Array : IntPtr.Zero, < (ret != null) ? ret.Array : IntPtr.Zero, < axis, (otype != null) ? otype.Descr : IntPtr.Zero, (int)op)); < } < if (rval != null) { < // TODO: Call array wrap processing: ufunc_object.c:1011 < } < return ndarray.ArrayReturn(rval); < } < < internal class PrepareArgs < { < internal CodeContext cntx; < internal Action prepare; < internal object[] args; < internal Exception ex; < } < < internal static int PrepareCallback(IntPtr ufunc, IntPtr arrays, IntPtr prepare_args) { < PrepareArgs args = (PrepareArgs)GCHandleFromIntPtr(prepare_args).Target; < ufunc f = ToInterface(ufunc); < ndarray[] arrs = new ndarray[f.nargs]; < // Copy the data into the array < for (int i = 0; i < arrs.Length; i++) { < arrs[i] = DecrefToInterface(Marshal.ReadIntPtr(arrays, IntPtr.Size * i)); < } < try { < args.prepare(args.cntx, f, arrs, args.args); < } catch (Exception ex) { < args.ex = ex; < return -1; < } finally { < // Copy the arrays back < for (int i = 0; i < arrs.Length; i++) { < IntPtr coreArray = arrs[i].Array; < Incref(coreArray); < Marshal.WriteIntPtr(arrays, IntPtr.Size * i, arrs[i].Array); < } < } < return 0; < } < < internal static void GenericFunction(CodeContext cntx, ufunc f, ndarray[] arrays, NpyDefs.NPY_TYPES[] sig, < Action prepare_outputs, object[] args) { < // Convert the typenums < int[] rtypenums = null; < int ntypenums = 0; < if (sig != null) { < rtypenums = sig.Cast().ToArray(); < ntypenums = rtypenums.Length; < } < unsafe { < // Convert and INCREF the arrays < IntPtr* mps = stackalloc IntPtr[arrays.Length]; < for (int i = 0; i < arrays.Length; i++) { < ndarray a = arrays[i]; < if (a == null) { < mps[i] = IntPtr.Zero; < } else { < IntPtr p = a.Array; < NpyCoreApi.Incref(p); < mps[i] = p; < } < } < < if (prepare_outputs != null) { < PrepareArgs pargs = new PrepareArgs { cntx = cntx, prepare = prepare_outputs, args = args, ex = null }; < GCHandle h = AllocGCHandle(pargs); < try { < int val; < Incref(f.UFunc); < lock (GlobalIterpLock) { < val = NpyUFunc_GenericFunction(f.UFunc, f.nargs, mps, ntypenums, rtypenums, 0, < PrepareCallback, GCHandle.ToIntPtr(h)); < } < if (val < 0) { < CheckError(); < if (pargs.ex != null) { < throw pargs.ex; < } < } < } finally { < // Release the handle < FreeGCHandle(h); < // Convert the args back. < for (int i = 0; i < arrays.Length; i++) { < if (mps[i] != IntPtr.Zero) { < arrays[i] = DecrefToInterface(mps[i]); < } else { < arrays[i] = null; < } < } < Decref(f.UFunc); < } < } else { < try { < Incref(f.UFunc); < Monitor.Enter(GlobalIterpLock); < if (NpyUFunc_GenericFunction(f.UFunc, f.nargs, mps, ntypenums, rtypenums, 0, < null, IntPtr.Zero) < 0) { < CheckError(); < } < } finally { < Monitor.Exit(GlobalIterpLock); < // Convert the args back. < for (int i = 0; i < arrays.Length; i++) { < if (mps[i] != IntPtr.Zero) { < arrays[i] = DecrefToInterface(mps[i]); < } else { < arrays[i] = null; < } < } < Decref(f.UFunc); < } < } < } < } < < internal static ndarray Byteswap(ndarray arr, bool inplace) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_Byteswap(arr.Array, inplace ? (byte)1 : (byte)0)); < } < } < < public static ndarray CastToType(ndarray arr, dtype d, bool fortran) { < Incref(d.Descr); < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_CastToType(arr.Array, d.Descr, (fortran ? 1 : 0))); < } < } < < internal static ndarray CheckAxis(ndarray arr, ref int axis, int flags) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_CheckAxis(arr.Array, ref axis, flags)); < } < } < < internal static void CopyAnyInto(ndarray dest, ndarray src) { < lock (GlobalIterpLock) { < if (NpyArray_CopyAnyInto(dest.Array, src.Array) < 0) { < CheckError(); < } < } < } < < internal static void DescrDestroyFields(IntPtr fields) { < lock (GlobalIterpLock) { < NpyDict_Destroy(fields); < } < } < < < internal static ndarray GetField(ndarray arr, dtype d, int offset) { < Incref(d.Descr); < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_GetField(arr.Array, d.Descr, offset)); < } < } < < internal static ndarray GetImag(ndarray arr) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_GetImag(arr.Array)); < } < } < < internal static ndarray GetReal(ndarray arr) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_GetReal(arr.Array)); < } < } < internal static ndarray GetField(ndarray arr, string name) { < NpyArray_DescrField field = GetDescrField(arr.Dtype, name); < dtype field_dtype = ToInterface(field.descr); < return GetField(arr, field_dtype, field.offset); < } < < internal static ndarray Newshape(ndarray arr, IntPtr[] dims, NpyDefs.NPY_ORDER order) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArrayAccess_Newshape(arr.Array, dims.Length, dims, (int)order)); < } < } < < internal static void SetShape(ndarray arr, IntPtr[] dims) { < lock (GlobalIterpLock) { < if (NpyArrayAccess_SetShape(arr.Array, dims.Length, dims) < 0) { < CheckError(); < } < } < } < < internal static void SetState(ndarray arr, IntPtr[] dims, NpyDefs.NPY_ORDER order, string rawdata) { < lock (GlobalIterpLock) { < NpyArrayAccess_SetState(arr.Array, dims.Length, dims, (int)order, rawdata, (rawdata != null) ? rawdata.Length : 0); < } < CheckError(); < } < < < internal static ndarray NewView(dtype d, int nd, IntPtr[] dims, IntPtr[] strides, < ndarray arr, IntPtr offset, bool ensure_array) { < Incref(d.Descr); < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_NewView(d.Descr, nd, dims, strides, arr.Array, offset, ensure_array ? 1 : 0)); < } < } < < /// < /// Returns a copy of the passed array in the specified order (C, Fortran) < /// < /// Array to copy < /// Desired order < /// New array < internal static ndarray NewCopy(ndarray arr, NpyDefs.NPY_ORDER order) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_NewCopy(arr.Array, (int)order)); < } < } < < internal static NpyDefs.NPY_TYPES TypestrConvert(int elsize, byte letter) { < lock (GlobalIterpLock) { < return (NpyDefs.NPY_TYPES)NpyArray_TypestrConvert(elsize, (int)letter); < } < } < < internal static void AddField(IntPtr fields, IntPtr names, int i, < string name, dtype fieldType, int offset, string title) { < Incref(fieldType.Descr); < lock (GlobalIterpLock) { < if (NpyArrayAccess_AddField(fields, names, i, name, fieldType.Descr, offset, title) < 0) { < CheckError(); < } < } < } < < internal static NpyArray_DescrField GetDescrField(dtype d, string name) { < NpyArray_DescrField result; < lock (GlobalIterpLock) { < if (NpyArrayAccess_GetDescrField(d.Descr, name, out result) < 0) { < throw new ArgumentException(String.Format("Field {0} does not exist", name)); < } < } < return result; < } < < internal static dtype DescrNewVoid(IntPtr fields, IntPtr names, int elsize, int flags, int alignment) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArrayAccess_DescrNewVoid(fields, names, elsize, flags, alignment)); < } < } < < internal static dtype DescrNewSubarray(dtype basetype, IntPtr[] shape) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArrayAccess_DescrNewSubarray(basetype.Descr, shape.Length, shape)); < } < } < < internal static dtype DescrNew(dtype d) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_DescrNew(d.Descr)); < } < } < < internal static void GetBytes(ndarray arr, byte[] bytes, NpyDefs.NPY_ORDER order) { < lock (GlobalIterpLock) { < if (NpyArrayAccess_GetBytes(arr.Array, bytes, bytes.LongLength, (int)order) < 0) { < CheckError(); < } < } < } < < internal static void FillWithObject(ndarray arr, object obj) { < GCHandle h = AllocGCHandle(obj); < try { < Monitor.Enter(GlobalIterpLock); < if (NpyArray_FillWithObject(arr.Array, GCHandle.ToIntPtr(h)) < 0) { < CheckError(); < } < } finally { < Monitor.Exit(GlobalIterpLock); < FreeGCHandle(h); < } < } < < internal static void FillWithScalar(ndarray arr, ndarray zero_d_array) { < lock (GlobalIterpLock) { < if (NpyArray_FillWithScalar(arr.Array, zero_d_array.Array) < 0) { < CheckError(); < } < } < } < < internal static ndarray View(ndarray arr, dtype d, object subtype) { < IntPtr descr = (d == null ? IntPtr.Zero : d.Descr); < if (descr != IntPtr.Zero) { < Incref(descr); < } < if (subtype != null) { < GCHandle h = AllocGCHandle(subtype); < try { < Monitor.Enter(GlobalIterpLock); < return DecrefToInterface( < NpyArray_View(arr.Array, descr, GCHandle.ToIntPtr(h))); < } finally { < Monitor.Exit(GlobalIterpLock); < FreeGCHandle(h); < } < } < else { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_View(arr.Array, descr, IntPtr.Zero)); < } < } < } < < internal static ndarray ViewLike(ndarray arr, ndarray proto) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArrayAccess_ViewLike(arr.Array, proto.Array)); < } < } < < internal static ndarray Subarray(ndarray self, IntPtr dataptr) { < lock (GlobalIterpLock) { < return DecrefToInterface(NpyArray_Subarray(self.Array, dataptr)); < } < } < < internal static dtype DescrNewByteorder(dtype d, char order) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_DescrNewByteorder(d.Descr, (byte)order)); < } < } < < internal static void UpdateFlags(ndarray arr, int flagmask) { < lock (GlobalIterpLock) { < NpyArray_UpdateFlags(arr.Array, flagmask); < } < } < < /// < /// Calls the fill function on the array dtype. This takes the first 2 values in the array and fills the array < /// so the difference between each pair of elements is the same. < /// < /// < internal static void Fill(ndarray arr) { < lock (GlobalIterpLock) { < if (NpyArrayAccess_Fill(arr.Array) < 0) { < CheckError(); < } < } < } < < internal static void SetDateTimeInfo(dtype d, string units, int num, int den, int events) { < lock (GlobalIterpLock) { < if (NpyArrayAccess_SetDateTimeInfo(d.Descr, units, num, den, events) < 0) { < CheckError(); < } < } < } < < internal static dtype InheritDescriptor(dtype t1, dtype other) { < lock (GlobalIterpLock) { < return DecrefToInterface(NpyArrayAccess_InheritDescriptor(t1.Descr, other.Descr)); < } < } < < internal static bool EquivTypes(dtype d1, dtype d2) { < lock (GlobalIterpLock) { < return NpyArray_EquivTypes(d1.Descr, d2.Descr) != 0; < } < } < < internal static bool CanCastTo(dtype d1, dtype d2) { < lock (GlobalIterpLock) { < return NpyArray_CanCastTo(d1.Descr, d2.Descr); < } < } < < /// < /// Returns the PEP 3118 format encoding for the type of an array. < /// < /// Array to get the format string for < /// Format string < internal static string GetBufferFormatString(ndarray arr) { < IntPtr ptr; < lock (GlobalIterpLock) { < ptr = NpyArrayAccess_GetBufferFormatString(arr.Array); < } < < String s = Marshal.PtrToStringAnsi(ptr); < lock (GlobalIterpLock) { < NpyArrayAccess_Free(ptr); // ptr was allocated with malloc, not SysStringAlloc - don't use automatic marshalling < } < return s; < } < < < /// < /// Reads the specified text or binary file and produces an array from the content. Currently only < /// the file name is allowed and not a PythonFile or Stream type due to limitations in the core < /// (assumes FILE *). < /// < /// File to read < /// Type descriptor for the resulting array < /// Number of elements to read, less than zero reads all available < /// Element separator string for text files, null for binary files < /// Array of file contents < internal static ndarray ArrayFromFile(string fileName, dtype type, int count, string sep) { < lock (GlobalIterpLock) { < return DecrefToInterface(NpyArrayAccess_FromFile(fileName, (type != null) ? type.Descr : IntPtr.Zero, count, sep)); < } < } < < < internal static ndarray ArrayFromString(string data, dtype type, int count, string sep) { < if (type != null) Incref(type.Descr); < lock (GlobalIterpLock) { < return DecrefToInterface(NpyArray_FromString(data, (IntPtr)data.Length, (type != null) ? type.Descr : IntPtr.Zero, count, sep)); < } < } < < internal static ndarray ArrayFromBytes(byte[] data, dtype type, int count, string sep) { < if (type != null) Incref(type.Descr); < lock (GlobalIterpLock) { < return DecrefToInterface(NpyArray_FromBytes(data, (IntPtr)data.Length, (type != null) ? type.Descr : IntPtr.Zero, count, sep)); < } < } < < internal static ndarray CompareStringArrays(ndarray a1, ndarray a2, NpyDefs.NPY_COMPARE_OP op, < bool rstrip = false) { < lock (GlobalIterpLock) { < return DecrefToInterface( < NpyArray_CompareStringArrays(a1.Array, a2.Array, (int)op, rstrip ? 1 : 0)); < } < } < < // API Defintions: every native call is private and must currently be wrapped by a function < // that at least holds the global interpreter lock (GlobalInterpLock). < internal static int ElementStrides(ndarray arr) { lock (GlobalIterpLock) { return NpyArray_ElementStrides(arr.Array); } } < < internal static IntPtr ArraySubscript(ndarray arr, NpyIndexes indexes) { < lock (GlobalIterpLock) { return NpyArray_Subscript(arr.Array, indexes.Indexes, indexes.NumIndexes); } < } < < internal static void IndexDealloc(NpyIndexes indexes) { < lock (GlobalIterpLock) { NpyArray_IndexDealloc(indexes.Indexes, indexes.NumIndexes); } < } < < internal static IntPtr ArraySize(ndarray arr) { < lock (GlobalIterpLock) { return NpyArray_Size(arr.Array); } < } < < /// < /// Indexes an array by a single long and returns the sub-array. < /// < /// The index into the array. < /// The sub-array. < internal static ndarray ArrayItem(ndarray arr, long index) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_ArrayItem(arr.Array, (IntPtr)index)); < } < } < < internal static ndarray IndexSimple(ndarray arr, NpyIndexes indexes) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_IndexSimple(arr.Array, indexes.Indexes, indexes.NumIndexes)); < } < } < < internal static int IndexFancyAssign(ndarray dest, NpyIndexes indexes, ndarray values) { < lock (GlobalIterpLock) { return NpyArray_IndexFancyAssign(dest.Array, indexes.Indexes, indexes.NumIndexes, values.Array); } < } < < internal static int SetField(ndarray arr, IntPtr dtype, int offset, ndarray srcArray) { < lock (GlobalIterpLock) { return NpyArray_SetField(arr.Array, dtype, offset, srcArray.Array); } < } < < internal static void SetNumericOp(int op, ufunc ufunc) { < lock (GlobalIterpLock) { NpyArray_SetNumericOp(op, ufunc.UFunc); } < } < < internal static ndarray ArrayAll(ndarray arr, int axis, ndarray ret = null) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_All(arr.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); < } < } < < internal static ndarray ArrayAny(ndarray arr, int axis, ndarray ret = null) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Any(arr.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); < } < } < < internal static ndarray ArrayArgMax(ndarray self, int axis, ndarray ret) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_ArgMax(self.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); < } < } < < internal static ndarray ArgSort(ndarray arr, int axis, NpyDefs.NPY_SORTKIND sortkind) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_ArgSort(arr.Array, axis, (int)sortkind)); < } < } < < internal static int ArrayBool(ndarray arr) { < lock (GlobalIterpLock) { return NpyArray_Bool(arr.Array); } < } < < internal static int ScalarKind(int typenum, ndarray arr) { < lock (GlobalIterpLock) { return NpyArray_ScalarKind(typenum, arr.Array); } < } < < internal static ndarray Choose(ndarray sel, ndarray[] arrays, ndarray ret = null, NpyDefs.NPY_CLIPMODE clipMode = NpyDefs.NPY_CLIPMODE.NPY_RAISE) { < lock (GlobalIterpLock) { < IntPtr[] coreArrays = arrays.Select(x => x.Array).ToArray(); < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_Choose(sel.Array, coreArrays, coreArrays.Length, < ret == null ? IntPtr.Zero : ret.Array, (int)clipMode)); < } < } < < internal static ndarray Conjugate(ndarray arr, ndarray ret = null) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyArray_Conjugate(arr.Array, (ret == null ? IntPtr.Zero : ret.Array))); < } < } < < internal static ndarray Correlate(ndarray arr1, ndarray arr2, NpyDefs.NPY_TYPES typenum, int mode) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyArray_Correlate(arr1.Array, arr2.Array, (int)typenum, mode)); < } < } < < internal static ndarray Correlate2(ndarray arr1, ndarray arr2, NpyDefs.NPY_TYPES typenum, int mode) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyArray_Correlate2(arr1.Array, arr2.Array, (int)typenum, mode)); < } < } < < internal static ndarray CopyAndTranspose(ndarray arr) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyArray_CopyAndTranspose(arr.Array)); < } < } < < internal static ndarray CumProd(ndarray arr, int axis, dtype rtype, ndarray ret = null) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_CumProd(arr.Array, axis, < (int)(rtype == null ? NpyDefs.NPY_TYPES.NPY_NOTYPE : rtype.TypeNum), < (ret == null ? IntPtr.Zero : ret.Array))); < } < } < < internal static ndarray CumSum(ndarray arr, int axis, dtype rtype, ndarray ret = null) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_CumSum(arr.Array, axis, < (int)(rtype == null ? NpyDefs.NPY_TYPES.NPY_NOTYPE : rtype.TypeNum), < (ret == null ? IntPtr.Zero : ret.Array))); < } < } < < internal static void DestroySubarray(IntPtr subarrayPtr) { < lock (GlobalIterpLock) { NpyArray_DestroySubarray(subarrayPtr); } < } < < internal static int DescrFindObjectFlag(dtype type) { < lock (GlobalIterpLock) { return NpyArray_DescrFindObjectFlag(type.Descr); } < } < < internal static ndarray Flatten(ndarray arr, NpyDefs.NPY_ORDER order) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_Flatten(arr.Array, (int)order)); < } < < internal static ndarray InnerProduct(ndarray arr1, ndarray arr2, NpyDefs.NPY_TYPES type) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_InnerProduct(arr1.Array, arr2.Array, (int)type)); < } < } < < internal static ndarray LexSort(ndarray[] arrays, int axis) { < int n = arrays.Length; < IntPtr[] coreArrays = arrays.Select(x => x.Array).ToArray(); < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_LexSort(coreArrays, n, axis)); < } < } < < internal static ndarray MatrixProduct(ndarray arr1, ndarray arr2, NpyDefs.NPY_TYPES type) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyArray_MatrixProduct(arr1.Array, arr2.Array, (int)type)); < } < } < < internal static ndarray ArrayMax(ndarray arr, int axis, ndarray ret = null) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Max(arr.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); < } < } < < internal static ndarray ArrayMin(ndarray arr, int axis, ndarray ret = null) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Min(arr.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); < } < } < < internal static ndarray[] NonZero(ndarray arr) { < int nd = arr.ndim; < IntPtr[] coreArrays = new IntPtr[nd]; < GCHandle h = NpyCoreApi.AllocGCHandle(arr); < try { < Monitor.Enter(GlobalIterpLock); < if (NpyCoreApi.NpyArray_NonZero(arr.Array, coreArrays, GCHandle.ToIntPtr(h)) < 0) { < NpyCoreApi.CheckError(); < } < } finally { < Monitor.Exit(GlobalIterpLock); < NpyCoreApi.FreeGCHandle(h); < } < return coreArrays.Select(x => NpyCoreApi.DecrefToInterface(x)).ToArray(); < } < < internal static ndarray Prod(ndarray arr, int axis, dtype rtype, ndarray ret = null) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_Prod(arr.Array, axis, < (int)(rtype == null ? NpyDefs.NPY_TYPES.NPY_NOTYPE : rtype.TypeNum), < (ret == null ? IntPtr.Zero : ret.Array))); < } < } < < internal static int PutMask(ndarray arr, ndarray values, ndarray mask) { < lock (GlobalIterpLock) { < return NpyArray_PutMask(arr.Array, values.Array, mask.Array); < } < } < < internal static int PutTo(ndarray arr, ndarray values, ndarray indices, NpyDefs.NPY_CLIPMODE clipmode) { < lock (GlobalIterpLock) { < return NpyArray_PutTo(arr.Array, values.Array, indices.Array, (int)clipmode); < } < } < < < internal static ndarray Ravel(ndarray arr, NpyDefs.NPY_ORDER order) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Ravel(arr.Array, (int)order)); < } < } < < internal static ndarray Repeat(ndarray arr, ndarray repeats, int axis) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Repeat(arr.Array, repeats.Array, axis)); < } < } < < internal static ndarray Searchsorted(ndarray arr, ndarray keys, NpyDefs.NPY_SEARCHSIDE side) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_SearchSorted(arr.Array, keys.Array, (int)side)); < } < } < < internal static void Sort(ndarray arr, int axis, NpyDefs.NPY_SORTKIND sortkind) { < lock (GlobalIterpLock) { < if (NpyCoreApi.NpyArray_Sort(arr.Array, axis, (int)sortkind) < 0) { < NpyCoreApi.CheckError(); < } < } < } < < internal static ndarray Squeeze(ndarray arr) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Squeeze(arr.Array)); < } < } < < internal static ndarray Sum(ndarray arr, int axis, dtype rtype, ndarray ret = null) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_Sum(arr.Array, axis, < (int)(rtype == null ? NpyDefs.NPY_TYPES.NPY_NOTYPE : rtype.TypeNum), < (ret == null ? IntPtr.Zero : ret.Array))); < } < < internal static ndarray SwapAxis(ndarray arr, int a1, int a2) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_SwapAxes(arr.Array, a1, a2)); < } < } < < internal static ndarray TakeFrom(ndarray arr, ndarray indices, int axis, ndarray ret, NpyDefs.NPY_CLIPMODE clipMode) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArray_TakeFrom(arr.Array, indices.Array, axis, (ret != null ? ret.Array : IntPtr.Zero), (int)clipMode) < ); < } < } < < internal static bool DescrIsNative(dtype type) { < lock (GlobalIterpLock) { < return NpyCoreApi.DescrIsNative(type.Descr) != 0; < } < } < < #endregion < < < #region C API Definitions < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_DescrNew(IntPtr descr); < internal static IntPtr DescrNewRaw(dtype d) { < lock (GlobalIterpLock) { return NpyArray_DescrNew(d.Descr); } < } < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_DescrFromType(Int32 type); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_SmallType(IntPtr descr1, IntPtr descr2); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern byte NpyArray_EquivTypes(IntPtr t1, IntPtr typ2); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_ElementStrides(IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_MoveInto(IntPtr dest, IntPtr src); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_FromArray(IntPtr arr, IntPtr descr, int flags); < < //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < //private static extern void NpyArray_dealloc(IntPtr arr); < < //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < //private static extern void NpyArray_DescrDestroy(IntPtr arr); < < //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < //internal static extern void NpyArray_DescrDeallocNamesAndFields(IntPtr dtype); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Subarray(IntPtr arr, IntPtr dataptr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Subscript(IntPtr arr, IntPtr indexes, int n); < < //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < //private static extern int NpyArray_SubscriptAssign(IntPtr self, IntPtr indexes, int n, IntPtr value); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArray_IndexDealloc(IntPtr indexes, int n); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Size(IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_ArrayItem(IntPtr array, IntPtr index); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_IndexSimple(IntPtr arr, IntPtr indexes, int n); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_IndexFancyAssign(IntPtr dest, IntPtr indexes, int n, IntPtr value_array); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_SetField(IntPtr arr, IntPtr descr, int offset, IntPtr val); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int Npy_IsAligned(IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int Npy_IsWriteable(IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_IterNew(IntPtr ao); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_IterSubscript(IntPtr iter, IntPtr indexes, int n); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_IterSubscriptAssign(IntPtr iter, IntPtr indexes, int n, IntPtr array_val); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_FillWithObject(IntPtr arr, IntPtr obj); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_FillWithScalar(IntPtr arr, IntPtr zero_d_array); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_FlatView(IntPtr arr); < < //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < //private static extern void npy_ufunc_dealloc(IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_GetNumericOp(int op); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArray_SetNumericOp(int op, IntPtr ufunc); < < //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < //internal static extern IntPtr NpyArray_GenericUnaryFunction(IntPtr arr1, IntPtr ufunc, IntPtr ret); < < //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < //internal static extern IntPtr NpyArray_GenericBinaryFunction(IntPtr arr1, IntPtr arr2, IntPtr ufunc, IntPtr ret); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_All(IntPtr self, int axis, IntPtr ret); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Any(IntPtr self, int axis, IntPtr ret); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_ArgMax(IntPtr self, int axis, IntPtr ret); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_ArgSort(IntPtr arr, int axis, int sortkind); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_Bool(IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_ScalarKind(int typenum, IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Byteswap(IntPtr arr, byte inplace); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern bool NpyArray_CanCastTo(IntPtr fromDtype, IntPtr toDtype); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_CastToType(IntPtr array, IntPtr descr, int fortran); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_CheckAxis(IntPtr arr, ref int axis, < int flags); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Choose(IntPtr array, < [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 2)]IntPtr[] mps, int n, IntPtr ret, int clipMode); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_CompareStringArrays(IntPtr a1, IntPtr a2, < int op, int rstrip); < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Conjugate(IntPtr arr, IntPtr ret); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Correlate(IntPtr arr1, IntPtr arr2, int typenum, int mode); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Correlate2(IntPtr arr1, IntPtr arr2, int typenum, int mode); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_CopyAndTranspose(IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_CopyAnyInto(IntPtr dest, IntPtr src); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_CumProd(IntPtr arr, int axis, int < rtype, IntPtr ret); < < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_CumSum(IntPtr arr, int axis, int < rtype, IntPtr ret); < < // Reentrant - does not need to be wrapped. < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArray_DescrAllocNames")] < internal static extern IntPtr DescrAllocNames(int n); < < // Reentrant - does not need to be wrapped. < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArray_DescrAllocFields")] < internal static extern IntPtr DescrAllocFields(); < < /// < /// Deallocates a subarray block. The pointer passed in is descr->subarray, not < /// a descriptor object itself. < /// < /// Subarray structure < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArray_DestroySubarray(IntPtr subarrayPtr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, < EntryPoint="npy_descr_find_object_flag")] < private static extern int NpyArray_DescrFindObjectFlag(IntPtr subarrayPtr); < < // Reentrant -- does not need to be wrapped. < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArray_DateTimeInfoNew")] < internal static extern IntPtr DateTimeInfoNew(string units, int num, int den, int events); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_DescrNewByteorder(IntPtr descr, byte order); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Flatten(IntPtr arr, int order); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_GetField(IntPtr arr, IntPtr dtype, int offset); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_GetImag(IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_GetReal(IntPtr arr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_InnerProduct(IntPtr arr, IntPtr arr2, int type); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_LexSort( < [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] IntPtr[] mps, int n, int axis); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_MatrixProduct(IntPtr arr, IntPtr arr2, int type); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Max(IntPtr arr, int axis, IntPtr ret); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Min(IntPtr arr, int axis, IntPtr ret); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_NewCopy(IntPtr arr, int order); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_NewView(IntPtr descr, int nd, < [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] dims, < [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] strides, < IntPtr arr, IntPtr offset, int ensureArray); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_NonZero(IntPtr self, < [MarshalAs(UnmanagedType.LPArray,SizeConst=NpyDefs.NPY_MAXDIMS)] IntPtr[] index_arrays, < IntPtr obj); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Prod(IntPtr arr, int axis, int < rtype, IntPtr ret); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_PutMask(IntPtr arr, IntPtr values, IntPtr mask); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_PutTo(IntPtr arr, IntPtr values, IntPtr indices, int clipmode); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Ravel(IntPtr arr, int fortran); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Repeat(IntPtr arr, IntPtr repeats, int axis); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_SearchSorted(IntPtr op1, IntPtr op2, int side); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_Sort(IntPtr arr, int axis, int sortkind); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Squeeze(IntPtr self); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_Sum(IntPtr arr, int axis, int rtype, IntPtr ret); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_SwapAxes(IntPtr arr, int a1, int a2); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_TakeFrom(IntPtr self, IntPtr indices, int axis, IntPtr ret, int clipMode); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArray_TypestrConvert(int itemsize, int gentype); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArray_UpdateFlags(IntPtr arr, int flagmask); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_View(IntPtr arr, IntPtr descr, IntPtr subtype); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyDict_Destroy(IntPtr dict); < < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < internal delegate int del_PrepareOutputs(IntPtr ufunc, IntPtr arrays, IntPtr args); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static unsafe extern int NpyUFunc_GenericFunction(IntPtr func, int nargs, IntPtr* mps, < int ntypenums, [In][MarshalAs(UnmanagedType.LPArray)] int[] rtypenums, < int originalObjectWasArray, del_PrepareOutputs npy_prepare_outputs_func, IntPtr prepare_out_args); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyUFunc_GenericReduction(IntPtr ufunc, < IntPtr arr, IntPtr indices, IntPtr arrOut, int axis, IntPtr descr, < int operation); < < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < internal unsafe delegate void del_GetErrorState(int* bufsizep, int* maskp, IntPtr* objp); < < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < internal unsafe delegate void del_ErrorHandler(sbyte* name, int errormask, IntPtr errobj, int retstatus, int* first); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyUFunc_SetFpErrFuncs(del_GetErrorState errorState, del_ErrorHandler handler); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArray_FromString(string data, IntPtr len, IntPtr dtype, int num, string sep); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArray_FromString")] < private static extern IntPtr NpyArray_FromBytes(byte[] data, IntPtr len, IntPtr dtype, int num, string sep); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint="npy_arraydescr_isnative")] < private static extern int DescrIsNative(IntPtr descr); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] < private static extern void npy_initlib(IntPtr functionDefs, IntPtr wrapperFuncs, < IntPtr error_set, IntPtr error_occured, IntPtr error_clear, < IntPtr cmp_priority, IntPtr incref, IntPtr decref, < IntPtr enable_thread, IntPtr disable_thread); < < [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint = "npy_add_sortfuncs")] < private static extern IntPtr NpyArray_InitSortModule(); < < #endregion < < #region NpyAccessLib functions < < internal static void ArraySetDescr(ndarray arr, dtype newDescr) { < lock (GlobalIterpLock) { NpyArrayAccess_ArraySetDescr(arr.Array, newDescr.Descr); } < } < < internal static long GetArrayStride(ndarray arr, int dims) { < lock (GlobalIterpLock) { < return NpyCoreApi.NpyArrayAccess_GetArrayStride(arr.Array, dims); < } < } < < internal static int BindIndex(ndarray arr, NpyIndexes indexes, NpyIndexes result) { < lock (GlobalIterpLock) { < return NpyCoreApi.NpyArrayAccess_BindIndex(arr.Array, indexes.Indexes, indexes.NumIndexes, result.Indexes); < } < } < < internal static int GetFieldOffset(dtype descr, string fieldName, out IntPtr descrPtr) { < lock (GlobalIterpLock) { < return NpyCoreApi.NpyArrayAccess_GetFieldOffset(descr.Descr, fieldName, out descrPtr); < } < } < < internal static void Resize(ndarray arr, IntPtr[] newshape, bool refcheck, NpyDefs.NPY_ORDER order) { < lock (GlobalIterpLock) { < if (NpyCoreApi.NpyArrayAccess_Resize(arr.Array, newshape.Length, newshape, (refcheck ? 1 : 0), (int)order) < 0) { < NpyCoreApi.CheckError(); < } < } < } < < internal static ndarray Transpose(ndarray arr, IntPtr[] permute) { < lock (GlobalIterpLock) { < return NpyCoreApi.DecrefToInterface( < NpyCoreApi.NpyArrayAccess_Transpose(arr.Array, (permute != null) ? permute.Length : 0, permute)); < } < } < < internal static void ClearUPDATEIFCOPY(ndarray arr) { < lock (GlobalIterpLock) { < NpyArrayAccess_ClearUPDATEIFCOPY(arr.Array); < } < } < < < internal static IntPtr IterNext(IntPtr corePtr) { < lock (GlobalIterpLock) { < return NpyArrayAccess_IterNext(corePtr); < } < } < < internal static void IterReset(IntPtr iter) { < lock (GlobalIterpLock) { < NpyArrayAccess_IterReset(iter); < } < } < < internal static IntPtr IterGoto1D(flatiter iter, IntPtr index) { < lock (GlobalIterpLock) { < return NpyArrayAccess_IterGoto1D(iter.Iter, index); < } < } < < internal static IntPtr IterCoords(flatiter iter) { < lock (GlobalIterpLock) { < return NpyArrayAccess_IterCoords(iter.Iter); < } < } < < internal static void DescrReplaceSubarray(dtype descr, dtype baseDescr, IntPtr[] dims) { < lock (GlobalIterpLock) { < NpyArrayAccess_DescrReplaceSubarray(descr.Descr, baseDescr.Descr, dims.Length, dims); < } < } < < internal static void DescrReplaceFields(dtype descr, IntPtr namesPtr, IntPtr fieldsDict) { < lock (GlobalIterpLock) { < NpyArrayAccess_DescrReplaceFields(descr.Descr, namesPtr, fieldsDict); < } < } < < internal static void ZeroFill(ndarray arr, IntPtr offset) { < lock (GlobalIterpLock) { < NpyArrayAccess_ZeroFill(arr.Array, offset); < } < } < < /// < /// Allocates a block of memory using NpyDataMem_NEW that is the same size as a single < /// array element and zeros the bytes. This is usually good enough, but is not a correct < /// zero for object arrays. The caller must free the memory with NpyDataMem_FREE(). < /// < /// Array to take the element size from < /// Pointer to zero'd memory < internal static IntPtr DupZeroElem(ndarray arr) { < lock (GlobalIterpLock) { < return NpyArrayAccess_DupZeroElem(arr.Array); < } < } < < internal unsafe static void CopySwapIn(ndarray arr, long offset, void* data, bool swap) { < lock (GlobalIterpLock) { < NpyArrayAccess_CopySwapIn(arr.Array, offset, data, swap ? 1 : 0); < } < } < < internal unsafe static void CopySwapOut(ndarray arr, long offset, void* data, bool swap) { < lock (GlobalIterpLock) { < NpyArrayAccess_CopySwapOut(arr.Array, offset, data, swap ? 1 : 0); < } < } < < internal unsafe static void CopySwapScalar(dtype dtype, void* dest, void* src, bool swap) { < lock (GlobalIterpLock) { < NpyArrayAccess_CopySwapScalar(dtype.Descr, dest, src, swap); < } < } < < internal static void SetNamesList(dtype descr, string[] nameslist) { < lock (GlobalIterpLock) { < NpyArrayAccess_SetNamesList(descr.Descr, nameslist, nameslist.Length); < } < } < < /// < /// Deallocates the core data structure. The obj IntRef is no longer valid after this < /// point and there must not be any existing internal core references to this object < /// either. < /// < /// Core NpyObject instance to deallocate < internal static void Dealloc(IntPtr obj) { < lock(GlobalIterpLock) { < NpyArrayAccess_Dealloc(obj); < } < } < < < /// < /// Constructs a native ufunc object from a Python function. The inputs define the < /// number of arguments taken, number of outputs, and function name. The pyLoopFunc < /// function implements the iteration over a given array and should always by PyUFunc_Om_On. < /// pyFunc is the actual function object to call. < /// < /// Number of input arguments < /// Number of result values (a PythonTuple if > 1) < /// Name of the function < /// PyUFunc_Om_On, implements looping over the array < /// Function to call < internal static IntPtr UFuncFromPyFunc(int nin, int nout, String funcName, < IntPtr pyWrapperFunc, IntPtr pyFunc) { < lock (GlobalIterpLock) { < return NpyUFuncAccess_UFuncFromPyFunc(nin, nout, funcName, pyWrapperFunc, pyFunc); < } < } < < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < internal static extern void NpyUFuncAccess_Init(IntPtr funcDict, < IntPtr funcDefs, IntPtr callMethodFunc, IntPtr addToDictFunc); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArrayAccess_ArraySetDescr(IntPtr array, IntPtr newDescr); < < /// < /// Increments the reference count of the core object. This routine is re-entrant and < /// locking is handled at the bottom layer. < /// < /// Pointer to the core object to increment reference count to < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint="NpyArrayAccess_Incref")] < internal static extern void Incref(IntPtr obj); < < /// < /// Decrements the reference count of the core object. This can trigger the release of < /// the reference to the managed wrapper and eventually trigger a garbage collection of < /// the object. If the core object does not have a managed wrapper, this can trigger the < /// immediate destruction of the core object. < /// < /// This function is re-entrant/thread-safe. < /// < /// Pointer to the core object < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint="NpyArrayAccess_Decref")] < internal static extern void Decref(IntPtr obj); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint = "NpyArrayAccess_GetNativeTypeInfo")] < private static extern byte GetNativeTypeInfo(out int intSize, < out int longsize, out int longLongSize, out int longDoubleSize); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint = "NpyArrayAccess_GetIntpArray")] < unsafe private static extern bool GetIntpArray(IntPtr srcPtr, int len, Int64 *dimMem); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_AllocArray(IntPtr descr, int nd, < [In][MarshalAs(UnmanagedType.LPArray,SizeParamIndex=1)] long[] dims, bool fortran); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern long NpyArrayAccess_GetArrayStride(IntPtr arr, int dims); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArrayAccess_BindIndex(IntPtr arr, IntPtr indexes, int n, IntPtr bound_indexes); < < [StructLayout(LayoutKind.Sequential)] < internal struct NpyArray_DescrField < { < internal IntPtr descr; < internal int offset; < internal IntPtr title; < } < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArrayAccess_GetDescrField(IntPtr descr, < [In][MarshalAs(UnmanagedType.LPStr)]string name, out NpyArray_DescrField field); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArrayAccess_GetFieldOffset(IntPtr descr, [MarshalAs(UnmanagedType.LPStr)] string fieldName, out IntPtr out_descr); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_MultiIterFromArrays([MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] arrays, int n); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_Newshape(IntPtr arr, int ndim, < [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] dims, < int order); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArrayAccess_SetShape(IntPtr arr, int ndim, < [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] dims); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArrayAccess_SetState(IntPtr arr, int ndim, < [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] dims, int order, < // Note string is marshalled as LPWStr (16-bit unicode) to avoid making a copy of it < [MarshalAsAttribute(UnmanagedType.LPWStr)]string rawdata, int rawLength); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArrayAccess_Resize(IntPtr arr, int ndim, < [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] IntPtr[] newshape, int resize, int fortran); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_Transpose(IntPtr arr, int ndim, < [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] IntPtr[] permute); < < /// < /// Returns the current ABI version. Re-entrant, does not need locking. < /// < /// current version # < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArrayAccess_GetAbiVersion")] < internal static extern float GetAbiVersion(); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArrayAccess_ClearUPDATEIFCOPY(IntPtr arr); < < < /// < /// Deallocates an NpyObject. Thread-safe. < /// < /// The object to deallocate < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArrayAccess_Dealloc(IntPtr obj); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint = "NpyArrayAccess_IterNext")] < private static extern IntPtr NpyArrayAccess_IterNext(IntPtr iter); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint = "NpyArrayAccess_IterReset")] < private static extern void NpyArrayAccess_IterReset(IntPtr iter); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint = "NpyArrayAccess_IterGoto1D")] < private static extern IntPtr NpyArrayAccess_IterGoto1D(IntPtr iter, IntPtr index); < < // Re-entrant < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint = "NpyArrayAccess_IterArray")] < internal static extern IntPtr IterArray(IntPtr iter); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_IterCoords(IntPtr iter); < < // < // Offset functions - these return the offsets to fields in native structures < // as a workaround for not being able to include the C header file. < // < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint = "NpyArrayAccess_ArrayGetOffsets")] < private static extern void ArrayGetOffsets(out int magicNumOffset, < out int descrOffset, out int ndOffset, out int dimensionsOffset, < out int stridesOffset, out int flagsOffset, out int dataOffset, < out int baseObjOffset, out int baseArrayOffset); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint = "NpyArrayAccess_DescrGetOffsets")] < private static extern void DescrGetOffsets(out int magicNumOffset, < out int kindOffset, out int typeOffset, out int byteorderOffset, < out int flagsOffset, out int typenumOffset, out int elsizeOffset, < out int alignmentOffset, out int namesOFfset, out int subarrayOffset, < out int fieldsOffset, out int dtinfoOffset, out int fieldsOffsetOffset, < out int fieldsDescrOffset, out int fieldsTitleOffset); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, < EntryPoint = "NpyArrayAccess_IterGetOffsets")] < private static extern void IterGetOffsets(out int sizeOffset, out int indexOffset); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArrayAccess_MultiIterGetOffsets")] < private static extern void MultiIterGetOffsets(out int numiterOffset, out int sizeOffset, < out int indexOffset, out int ndOffset, out int dimensionsOffset, out int itersOffset); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl,EntryPoint = "NpyArrayAccess_UFuncGetOffsets")] < private static extern void UFuncGetOffsets(out int ninOffset, < out int noutOffset, out int nargsOffset, out int coreEnabledOffset, < out int identifyOffset, out int ntypesOffset, out int checkRetOffset, < out int nameOffset, out int typesOffset, out int coreSigOffset); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl,EntryPoint = "NpyArrayAccess_GetIndexInfo")] < private static extern void GetIndexInfo(out int unionOffset, out int indexSize, out int maxDims); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl,EntryPoint = "NpyArrayAccess_NewFromDescrThunk")] < private static extern IntPtr NewFromDescrThunk(IntPtr descr, int nd, int flags, < [In][MarshalAs(UnmanagedType.LPArray)] long[] dims, < [In][MarshalAs(UnmanagedType.LPArray)] long[] strides, IntPtr data, IntPtr interfaceData); < < // Thread-safe. < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArrayAccess_DescrDestroyNames")] < internal static extern void DescrDestroyNames(IntPtr p, int n); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArrayAccess_AddField(IntPtr fields, IntPtr names, int i, < [MarshalAs(UnmanagedType.LPStr)]string name, IntPtr descr, int offset, < [MarshalAs(UnmanagedType.LPStr)]string title); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_DescrNewVoid(IntPtr fields, IntPtr names, int elsize, int flags, int alignment); < < /// < /// Allocates a new VOID descriptor and sets the subarray field as specified. < /// < /// Base descriptor for the subarray < /// Number of dimensions < /// Array of size of each dimension < /// New descriptor object < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_DescrNewSubarray(IntPtr baseDescr, < int ndim, [In][MarshalAs(UnmanagedType.LPArray)]IntPtr[] dims); < < /// < /// Replaces / sets the subarray field of an existing object. < /// < /// Descriptor object to be modified < /// Base descriptor for the subaray < /// Number of dimensions < /// Array of size of each dimension < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArrayAccess_DescrReplaceSubarray(IntPtr descr, IntPtr baseDescr, < int ndim, [In][MarshalAs(UnmanagedType.LPArray)]IntPtr[] dims); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArrayAccess_DescrReplaceFields(IntPtr descr, IntPtr namesArr, IntPtr fieldsDict); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArrayAccess_GetBytes(IntPtr arr, < [Out][MarshalAs(UnmanagedType.LPArray,SizeParamIndex=2)] byte[] bytes, long len, int order); < < // Thread-safe < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < internal static extern IntPtr NpyArrayAccess_ToInterface(IntPtr arr); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArrayAccess_ZeroFill(IntPtr arr, IntPtr offset); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_DupZeroElem(IntPtr arr); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArrayAccess_Fill(IntPtr arr); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static unsafe extern void NpyArrayAccess_CopySwapIn(IntPtr arr, long offset, void* data, int swap); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_ViewLike(IntPtr arr, IntPtr proto); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static unsafe extern void NpyArrayAccess_CopySwapOut(IntPtr arr, long offset, void* data, int swap); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static unsafe extern void NpyArrayAccess_CopySwapScalar(IntPtr dtype, void *dest, void* src, bool swap); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern int NpyArrayAccess_SetDateTimeInfo(IntPtr descr, < [MarshalAs(UnmanagedType.LPStr)]string units, int num, int den, int events); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_InheritDescriptor(IntPtr type, IntPtr conv); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_GetBufferFormatString(IntPtr arr); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArrayAccess_Free(IntPtr ptr); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyArrayAccess_FromFile(string fileName, IntPtr dtype, int count, string sep); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern void NpyArrayAccess_SetNamesList(IntPtr dtype, string[] nameslist, int len); < < // Thread-safe < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArrayAccess_DictAllocIter")] < internal static extern IntPtr NpyDict_AllocIter(); < < // Thread-safe < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArrayAccess_DictFreeIter")] < internal static extern void NpyDict_FreeIter(IntPtr iter); < < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] < private static extern IntPtr NpyUFuncAccess_UFuncFromPyFunc(int nin, int nout, String funcName, IntPtr pyThunk, IntPtr func); < < /// < /// Accesses the next dictionary item, returning the key and value. Thread-safe when operating across < /// separate iterators; caller must ensure that one iterator is not access simultaneously from two < /// different threads. < /// < /// Pointer to the dictionary object < /// Iterator structure < /// Next key < /// Next value < /// True if an element was returned, false at the end of the sequence < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArrayAccess_DictNext")] < internal static extern bool NpyDict_Next(IntPtr dict, IntPtr iter, out IntPtr key, out IntPtr value); < < // Thread-safe < [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArrayAccess_FormatLongFloat")] < internal static extern string FormatLongFloat(double v, int precision); < < #endregion < < < #region Callbacks and native access < < /* This structure must match the NpyObject_HEAD structure in npy_object.h < * exactly as it is used to determine the platform-specific offsets. The < * offsets allow the C# code to access these fields directly. */ < [StructLayout(LayoutKind.Sequential)] < internal struct NpyObject_HEAD { < internal IntPtr nob_refcnt; < internal IntPtr nob_type; < internal IntPtr nob_interface; < } < < [StructLayout(LayoutKind.Sequential)] < struct NpyInterface_WrapperFuncs { < internal IntPtr array_new_wrapper; < internal IntPtr iter_new_wrapper; < internal IntPtr multi_iter_new_wrapper; < internal IntPtr neighbor_iter_new_wrapper; < internal IntPtr descr_new_from_type; < internal IntPtr descr_new_from_wrapper; < internal IntPtr ufunc_new_wrapper; < } < < [StructLayout(LayoutKind.Sequential)] < internal struct NpyArrayOffsets { < internal int off_magic_number; < internal int off_descr; < internal int off_nd; < internal int off_dimensions; < internal int off_strides; < internal int off_flags; < internal int off_data; < internal int off_base_obj; < internal int off_base_array; < } < < [StructLayout(LayoutKind.Sequential)] < internal struct NpyArrayDescrOffsets < { < internal int off_magic_number; < internal int off_kind; < internal int off_type; < internal int off_byteorder; < internal int off_flags; < internal int off_type_num; < internal int off_elsize; < internal int off_alignment; < internal int off_names; < internal int off_subarray; < internal int off_fields; < internal int off_dtinfo; < < /// < /// Offset to the 'offset' field of the NpyArray_DescrField structure. < /// < internal int off_fields_offset; < < /// < /// Offset to the 'descr' field of the NpyArray_DescrField structure. < /// < internal int off_fields_descr; < < /// < /// Offset to the 'title' field of the NpyArray_DescrField structure. < /// < internal int off_fields_title; < } < < [StructLayout(LayoutKind.Sequential)] < internal struct NpyArrayIterOffsets < { < internal int off_size; < internal int off_index; < } < < [StructLayout(LayoutKind.Sequential)] < internal struct NpyArrayMultiIterOffsets < { < internal int off_numiter; < internal int off_size; < internal int off_index; < internal int off_nd; < internal int off_dimensions; < internal int off_iters; < } < < [StructLayout(LayoutKind.Sequential)] < internal struct NpyArrayIndexInfo { < internal int off_union; < internal int sizeof_index; < internal int max_dims; < } < < [StructLayout(LayoutKind.Sequential)] < internal struct NpyUFuncOffsets < { < internal int off_nin; < internal int off_nout; < internal int off_nargs; < internal int off_identify; < internal int off_ntypes; < internal int off_check_return; < internal int off_name; < internal int off_types; < internal int off_core_signature; < internal int off_core_enabled; < } < < [StructLayout(LayoutKind.Sequential)] < internal class DateTimeInfo { < internal NpyDefs.NPY_DATETIMEUNIT @base; < internal int num; < internal int den; < internal int events; < } < < [StructLayout(LayoutKind.Sequential)] < internal unsafe struct NpyArray_ArrayDescr { < internal IntPtr @base; < internal IntPtr shape_num_dims; < internal IntPtr* shape_dims; < } < < internal static readonly NpyArrayOffsets ArrayOffsets; < internal static readonly NpyArrayDescrOffsets DescrOffsets; < internal static readonly NpyArrayIterOffsets IterOffsets; < internal static readonly NpyArrayMultiIterOffsets MultiIterOffsets; < internal static readonly NpyArrayIndexInfo IndexInfo; < internal static readonly NpyUFuncOffsets UFuncOffsets; < < internal static byte oppositeByteOrder; < < /// < /// Used for synchronizing modifications to interface pointer. < /// < private static object interfaceSyncRoot = new Object(); < < /// < /// Offset to the interface pointer. < /// < internal static int Offset_InterfacePtr = (int)Marshal.OffsetOf(typeof(NpyObject_HEAD), "nob_interface"); < < /// < /// Offset to the reference count in the header structure. < /// < internal static int Offset_RefCount = (int)Marshal.OffsetOf(typeof(NpyObject_HEAD), "nob_refcnt"); < < private static IntPtr lastArrayHandle = IntPtr.Zero; < < /// < /// Given a pointer to a core (native) object, returns the managed wrapper. < /// < /// Address of native object < /// Managed wrapper object < internal static TResult ToInterface(IntPtr ptr) { < if (ptr == IntPtr.Zero) { < return default(TResult); < } < < IntPtr wrapper = Marshal.ReadIntPtr(ptr, (int)Offset_InterfacePtr); < if (wrapper == IntPtr.Zero) { < // The wrapper object is dynamically created for some instances < // so this call into native land triggers that magic. < wrapper = NpyArrayAccess_ToInterface(ptr); < if (wrapper == IntPtr.Zero) { < throw new IronPython.Runtime.Exceptions.RuntimeException( < String.Format("Managed wrapper for type '{0}' is NULL.", typeof(TResult).Name)); < } < } < return (TResult)GCHandleFromIntPtr(wrapper).Target; < } < < < /// < /// Same as ToInterface but releases the core reference. < /// < /// Type of the expected object < /// Pointer to the core object < /// Wrapper instance corresponding to ptr < internal static TResult DecrefToInterface(IntPtr ptr) { < CheckError(); < if (ptr == IntPtr.Zero) { < return default(TResult); < } < TResult result = ToInterface(ptr); < Decref(ptr); < return result; < } < < < /// < /// Allocates a managed wrapper for the passed array object. < /// < /// Pointer to the native array object < /// If true forces base array type, not subtype < /// Not sure how this is used < /// Not used < /// void ** for us to store the allocated wrapper < /// True on success, false on failure < private static int ArrayNewWrapper(IntPtr coreArray, int ensureArray, < int customStrides, IntPtr subtypePtr, IntPtr interfaceData, < IntPtr interfaceRet) { < int success = 1; // Success < < try { < PythonType subtype = null; < object interfaceObj = null; < ndarray wrapArray = null; < < if (interfaceData != IntPtr.Zero) { < interfaceObj = GCHandleFromIntPtr(interfaceData, true).Target; < } < < if (interfaceObj is UseExistingWrapper) { < // Special case for UseExistingWrapper < UseExistingWrapper w = (UseExistingWrapper)interfaceObj; < wrapArray = (ndarray)w.Wrapper; < wrapArray.SetArray(coreArray); < subtype = DynamicHelpers.GetPythonType(wrapArray); < } else { < // Determine the subtype. null means ndarray < if (ensureArray == 0) { < if (subtypePtr != IntPtr.Zero) { < subtype = (PythonType)GCHandleFromIntPtr(subtypePtr).Target; < } else if (interfaceObj != null) { < subtype = DynamicHelpers.GetPythonType(interfaceObj); < } < } < // Create the wrapper < if (subtype != null) { < CodeContext cntx = NpyUtil_Python.DefaultContext; < wrapArray = (ndarray)ObjectOps.__new__(cntx, subtype); < wrapArray.SetArray(coreArray); < } else { < wrapArray = new ndarray(); < wrapArray.SetArray(coreArray); < } < } < < // Call __array_finalize__ for subtypes < if (subtype != null) { < CodeContext cntx = NpyUtil_Python.DefaultContext; < if (PythonOps.HasAttr(cntx, wrapArray, "__array_finalize__")) { < object func = PythonOps.ObjectGetAttribute(cntx, wrapArray, "__array_finalize__"); < if (func != null) { < if (customStrides != 0) { < UpdateFlags(wrapArray, NpyDefs.NPY_UPDATE_ALL); < } < // TODO: Check for a Capsule < PythonCalls.Call(cntx, func, interfaceObj); < } < } < } < < // Write the result < IntPtr ret = GCHandle.ToIntPtr(AllocGCHandle(wrapArray)); < lastArrayHandle = ret; < Marshal.WriteIntPtr(interfaceRet, ret); < ndarray.IncreaseMemoryPressure(wrapArray); < < // TODO: Skipping subtype-specific initialization (ctors.c:718) < } catch (InsufficientMemoryException) { < Console.WriteLine("Insufficient memory while allocating array wrapper."); < success = 0; < } catch (Exception e) { < Console.WriteLine("Exception while allocating array wrapper: {0}", e.Message); < success = 0; < } < return success; < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate int del_ArrayNewWrapper(IntPtr coreArray, int ensureArray, < int customStrides, IntPtr subtypePtr, IntPtr interfaceData, < IntPtr interfaceRet); < < < /// < /// Constructs a new managed wrapper for an interator object. This function < /// is thread-safe. < /// < /// Pointer to the native instance < /// Location to store GCHandle to the wrapper < /// 1 on success, 0 on error < private static int IterNewWrapper(IntPtr coreIter, ref IntPtr interfaceRet) { < int success = 1; < < try { < lock (interfaceSyncRoot) { < // Check interfaceRet inside the lock because some interface < // wrappers are dynamically created and two threads could < // trigger these event at the same time. < if (interfaceRet == IntPtr.Zero) { < flatiter wrapIter = new flatiter(coreIter); < interfaceRet = GCHandle.ToIntPtr(AllocGCHandle(wrapIter)); < } < } < } catch (InsufficientMemoryException) { < Console.WriteLine("Insufficient memory while allocating iterator wrapper."); < success = 0; < } catch (Exception) { < Console.WriteLine("Exception while allocating iterator wrapper."); < success = 0; < } < return success; < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate int del_IterNewWrapper(IntPtr coreIter, ref IntPtr interfaceRet); < < < < /// < /// Constructs a new managed wrapper for a multi-iterator. This funtion < /// is thread safe. < /// < /// Pointer to the native instance < /// Location to store the wrapper handle < /// < private static int MultiIterNewWrapper(IntPtr coreIter, ref IntPtr interfaceRet) { < int success = 1; < try { < lock (interfaceSyncRoot) { < // Check interfaceRet inside the lock because some interface < // wrappers are dynamically created and two threads could < // trigger these event at the same time. < if (interfaceRet == IntPtr.Zero) { < broadcast wrapIter = broadcast.BeingCreated; < interfaceRet = GCHandle.ToIntPtr(AllocGCHandle(wrapIter)); < } < } < } catch (InsufficientMemoryException) { < Console.WriteLine("Insufficient memory while allocating iterator wrapper."); < success = 0; < } catch (Exception) { < Console.WriteLine("Exception while allocating iterator wrapper."); < success = 0; < } < return success; < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate int del_MultiIterNewWrapper(IntPtr coreIter, ref IntPtr interfaceRet); < < < /// < /// Allocated a managed wrapper for one of the core, native types < /// < /// Type code (not used) < /// Pointer to the native descriptor object < /// void** for returning allocated wrapper < /// 1 on success, 0 on error < private static int DescrNewFromType(int type, IntPtr descr, IntPtr interfaceRet) { < int success = 1; < < try { < // TODO: Descriptor typeobj not handled. Do we need to? < < dtype wrap = new dtype(descr, type); < Marshal.WriteIntPtr(interfaceRet, < GCHandle.ToIntPtr(AllocGCHandle(wrap))); < } catch (InsufficientMemoryException) { < Console.WriteLine("Insufficient memory while allocating descriptor wrapper."); < success = 0; < } catch (Exception) { < Console.WriteLine("Exception while allocating descriptor wrapper."); < success = 0; < } < return success; < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate int del_DescrNewFromType(int type, IntPtr descr, IntPtr interfaceRet); < < < < < /// < /// Allocated a managed wrapper for a user defined type < /// < /// Pointer to the base descriptor (not used) < /// Pointer to the native descriptor object < /// void** for returning allocated wrapper < /// 1 on success, 0 on error < private static int DescrNewFromWrapper(IntPtr baseTmp, IntPtr descr, IntPtr interfaceRet) { < int success = 1; < < try { < // TODO: Descriptor typeobj not handled. Do we need to? < < dtype wrap = new dtype(descr); < Marshal.WriteIntPtr(interfaceRet, < GCHandle.ToIntPtr(AllocGCHandle(wrap))); < } catch (InsufficientMemoryException) { < Console.WriteLine("Insufficient memory while allocating descriptor wrapper."); < success = 0; < } catch (Exception) { < Console.WriteLine("Exception while allocating descriptor wrapper."); < success = 0; < } < return success; < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate int del_DescrNewFromWrapper(IntPtr baseTmp, IntPtr descr, IntPtr interfaceRet); < < < < /// < /// Allocated a managed wrapper for a UFunc object. < /// < /// Pointer to the base object < /// void** for returning allocated wrapper < /// 1 on success, 0 on error < private static void UFuncNewWrapper(IntPtr basePtr, IntPtr interfaceRet) { < try { < ufunc wrap = new ufunc(basePtr); < Marshal.WriteIntPtr(interfaceRet, < GCHandle.ToIntPtr(AllocGCHandle(wrap))); < } catch (InsufficientMemoryException) { < Console.WriteLine("Insufficient memory while allocating ufunc wrapper."); < } catch (Exception) { < Console.WriteLine("Exception while allocating ufunc wrapper."); < } < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate void del_UFuncNewWrapper(IntPtr basePtr, IntPtr interfaceRet); < < < /// < /// Accepts a pointer to an existing GCHandle object and allocates < /// an additional GCHandle to the same object. This effectively < /// does an "incref" on the object. Used in cases where an array < /// of objects is being copied. < /// < /// Usually wrapPtr is NULL meaning that we just allocate a new < /// handle and return it. If wrapPtr != NULL then we assign the < /// new handle to it as well. Must be done atomically. < /// < /// Pointer to GCHandle of object to reference < /// Address of the nob_interface field (not value of it) < /// New handle to the input object < private static IntPtr IncrefCallback(IntPtr ptr, IntPtr nobInterfacePtr) { < if (ptr == IntPtr.Zero) { < return IntPtr.Zero; < } < < IntPtr newWrapRef = IntPtr.Zero; < lock (interfaceSyncRoot) { < GCHandle oldWrapRef = GCHandleFromIntPtr(ptr, true); < object wrapperObj = oldWrapRef.Target; < newWrapRef = GCHandle.ToIntPtr(AllocGCHandle(wrapperObj)); < if (nobInterfacePtr != IntPtr.Zero) { < // Replace the contents of nobInterfacePtr with the new reference. < Marshal.WriteIntPtr(nobInterfacePtr, newWrapRef); < FreeGCHandle(oldWrapRef); < } < } < return newWrapRef; < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate IntPtr del_Incref(IntPtr ptr, IntPtr wrapPtr); < < /// < /// Releases the reference to the given interface object. Note that < /// this is not a decref but actual freeingo of this handle, it can < /// not be used again. < /// < /// Interface object to 'decref' < private static void DecrefCallback(IntPtr ptr, IntPtr nobInterfacePtr) { < lock (interfaceSyncRoot) { < if (nobInterfacePtr != IntPtr.Zero) { < // Deferencing the interface wrapper. We can't just null the < // wrapPtr because we have to have maintain the link so we < // allocate a weak reference instead. < GCHandle oldWrapRef = GCHandleFromIntPtr(ptr); < Object wrapperObj = oldWrapRef.Target; < Marshal.WriteIntPtr(nobInterfacePtr, < GCHandle.ToIntPtr(AllocGCHandle(wrapperObj, GCHandleType.Weak))); < FreeGCHandle(oldWrapRef); < } else { < if (ptr != IntPtr.Zero) { < FreeGCHandle(GCHandleFromIntPtr(ptr)); < } < } < } < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate void del_Decref(IntPtr ptr, IntPtr wrapPtr); < < < internal static IntPtr GetRefcnt(IntPtr obj) { < // NOTE: I'm relying on the refcnt being first. < return Marshal.ReadIntPtr(obj); < } < < < < #region Error handling < < /// < /// Error type, determines which type of exception to throw. < /// DANGER! Must be kept in sync with npy_api.h < /// < private enum NpyExc_Type { < MemoryError = 0, < IOError, < ValueError, < TypeError, < IndexError, < RuntimeError, < AttributeError, < ComplexWarning, < NotImplementedError, < FloatingPointError, < NoError < } < < < /// < /// Indicates the most recent error code or NpyExc_NoError if nothing pending < /// < [ThreadStatic] < private static NpyExc_Type ErrorCode = NpyExc_Type.NoError; < < /// < /// Stores the most recent error message per-thread < /// < [ThreadStatic] < private static string ErrorMessage = null; < < public static void CheckError() { < if (ErrorCode != NpyExc_Type.NoError) { < NpyExc_Type errTmp = ErrorCode; < String msgTmp = ErrorMessage; < < ErrorCode = NpyExc_Type.NoError; < ErrorMessage = null; < < switch (errTmp) { < case NpyExc_Type.MemoryError: < throw new InsufficientMemoryException(msgTmp); < case NpyExc_Type.IOError: < throw new System.IO.IOException(msgTmp); < case NpyExc_Type.ValueError: < throw new ArgumentException(msgTmp); < case NpyExc_Type.IndexError: < throw new IndexOutOfRangeException(msgTmp); < case NpyExc_Type.RuntimeError: < throw new IronPython.Runtime.Exceptions.RuntimeException(msgTmp); < case NpyExc_Type.AttributeError: < throw new MissingMemberException(msgTmp); < case NpyExc_Type.ComplexWarning: < PythonOps.Warn(NpyUtil_Python.DefaultContext, ComplexWarning, msgTmp); < break; < case NpyExc_Type.TypeError: < throw new IronPython.Runtime.Exceptions.TypeErrorException(msgTmp); < case NpyExc_Type.NotImplementedError: < throw new NotImplementedException(msgTmp); < case NpyExc_Type.FloatingPointError: < throw new IronPython.Runtime.Exceptions.FloatingPointException(msgTmp); < default: < Console.WriteLine("Unhandled exception type {0} in CheckError.", errTmp); < throw new IronPython.Runtime.Exceptions.RuntimeException(msgTmp); < } < } < } < < private static PythonType complexWarning; < < internal static PythonType ComplexWarning { < get { < if (complexWarning == null) { < CodeContext cntx = NpyUtil_Python.DefaultContext; < PythonModule core = (PythonModule)PythonOps.ImportBottom(cntx, "numpy.core", 0); < object tmp; < if (PythonOps.ModuleTryGetMember(cntx, core, "ComplexWarning", out tmp)) { < complexWarning = (PythonType)tmp; < } < } < return complexWarning; < } < } < < private static void SetError(NpyExc_Type exceptType, string msg) { < if (exceptType == NpyExc_Type.ComplexWarning) { < Console.WriteLine("Warning: {0}", msg); < } else { < ErrorCode = exceptType; < ErrorMessage = msg; < } < } < < < /// < /// Called by NpyErr_SetMessage in the native world when something bad happens < /// < /// Type of exception to be thrown < /// Message string < unsafe private static void SetErrorCallback(int exceptType, sbyte* bStr) { < if (exceptType < 0 || exceptType >= (int)NpyExc_Type.NoError) { < Console.WriteLine("Internal error: invalid exception type {0}, likely ErrorType and npyexc_type (npy_api.h) are out of sync.", < exceptType); < } < SetError((NpyExc_Type)exceptType, new string(bStr)); < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < unsafe public delegate void del_SetErrorCallback(int exceptType, sbyte* msg); < < < /// < /// Called by native side to check to see if an error occurred < /// < /// 1 if an error is pending, 0 if not < private static int ErrorOccurredCallback() { < return (ErrorCode != NpyExc_Type.NoError) ? 1 : 0; < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate int del_ErrorOccurredCallback(); < < < private static void ClearErrorCallback() { < ErrorCode = NpyExc_Type.NoError; < ErrorMessage = null; < } < [UnmanagedFunctionPointer(CallingConvention.Cdecl)] < public delegate void del_ClearErrorCallback(); < < private static unsafe void GetErrorState(int* bufsizep, int* errmaskp, IntPtr* errobjp) { < // deref any existing obj < if (*errobjp != IntPtr.Zero) { < FreeGCHandle(GCHandleFromIntPtr(*errobjp)); < *errobjp = IntPtr.Zero; < } < var info = umath.errorInfo; < if (info == null) { < *bufsizep = NpyDefs.NPY_BUFSIZE; < *errmaskp = NpyDefs.NPY_UFUNC_ERR_DEFAULT; < *errobjp = IntPtr.Zero; < } else { < umath.ErrorInfo vInfo = (umath.ErrorInfo)info; < *bufsizep = vInfo.bufsize; < *errmaskp = vInfo.errmask; < if (vInfo.errobj != null) { < GCHandle h = AllocGCHandle(vInfo.errobj); < *errobjp = GCHandle.ToIntPtr(h); < } < } < } < < private static unsafe void ErrorHandler(sbyte* name, int errormask, IntPtr errobj, int retstatus, int* first) { < try { < object obj; < if (errobj != IntPtr.Zero) { < obj = GCHandleFromIntPtr(errobj).Target; < } else { < obj = null; < } < string sName = new string(name); < NpyDefs.NPY_UFUNC_ERR method; < if ((retstatus & (int)NpyDefs.NPY_UFUNC_FPE.DIVIDEBYZERO) != 0) { < bool bfirst = (*first != 0); < int handle = (errormask & (int)NpyDefs.NPY_UFUNC_MASK.DIVIDEBYZERO); < method = (NpyDefs.NPY_UFUNC_ERR)(handle >> (int)NpyDefs.NPY_UFUNC_SHIFT.DIVIDEBYZERO); < umath.ErrorHandler(sName, method, obj, "divide by zero", retstatus, ref bfirst); < *first = bfirst ? 1 : 0; < } < if ((retstatus & (int)NpyDefs.NPY_UFUNC_FPE.OVERFLOW) != 0) { < bool bfirst = (*first != 0); < int handle = (errormask & (int)NpyDefs.NPY_UFUNC_MASK.OVERFLOW); < method = (NpyDefs.NPY_UFUNC_ERR)(handle >> (int)NpyDefs.NPY_UFUNC_SHIFT.OVERFLOW); < umath.ErrorHandler(sName, method, obj, "overflow", retstatus, ref bfirst); < *first = bfirst ? 1 : 0; < } < if ((retstatus & (int)NpyDefs.NPY_UFUNC_FPE.UNDERFLOW) != 0) { < bool bfirst = (*first != 0); < int handle = (errormask & (int)NpyDefs.NPY_UFUNC_MASK.UNDERFLOW); < method = (NpyDefs.NPY_UFUNC_ERR)(handle >> (int)NpyDefs.NPY_UFUNC_SHIFT.UNDERFLOW); < umath.ErrorHandler(sName, method, obj, "underflow", retstatus, ref bfirst); < *first = bfirst ? 1 : 0; < } < if ((retstatus & (int)NpyDefs.NPY_UFUNC_FPE.INVALID) != 0) { < bool bfirst = (*first != 0); < int handle = (errormask & (int)NpyDefs.NPY_UFUNC_MASK.INVALID); < method = (NpyDefs.NPY_UFUNC_ERR)(handle >> (int)NpyDefs.NPY_UFUNC_SHIFT.INVALID); < umath.ErrorHandler(sName, method, obj, "invalid", retstatus, ref bfirst); < *first = bfirst ? 1 : 0; < } < } catch (Exception ex) { < SetError(NpyExc_Type.FloatingPointError, ex.Message); < } < } < < #endregion < < #region Thread handling < // CPython uses a threading model that is single threaded unless the global interpreter lock < // is explicitly released. While .NET supports true threading, the ndarray core has not been < // completely checked to makes sure that it is re-entrant much less modify each function to < // perform fine-grained locking on individual objects. Thus we artificially lock IronPython < // down and force ndarray accesses to be single threaded for now. < < /// < /// Equivalent to the CPython GIL. < /// < private static readonly object GlobalIterpLock = new object(); < < /// < /// Releases the GIL so other threads can run. < /// < /// Return value is unused < private static IntPtr EnableThreads() { < Monitor.Exit(GlobalIterpLock); < return IntPtr.Zero; < } < private delegate IntPtr del_EnableThreads(); < < /// < /// Re-acquires the GIL forcing code to stop until other threads have exited the ndarray core. < /// < /// Unused < private static void DisableThreads(IntPtr unused) { < Monitor.Enter(GlobalIterpLock); < } < private delegate void del_DisableThreads(IntPtr unused); < < #endregion < < // < // These variables hold a reference to the delegates passed into the core. < // Failure to hold these references causes the callback function to disappear < // at some point when the GC runs. < // < private static readonly NpyInterface_WrapperFuncs wrapFuncs; < < private static readonly del_ArrayNewWrapper ArrayNewWrapDelegate = < new del_ArrayNewWrapper(ArrayNewWrapper); < private static readonly del_IterNewWrapper IterNewWrapperDelegate = < new del_IterNewWrapper(IterNewWrapper); < private static readonly del_MultiIterNewWrapper MultiIterNewWrapperDelegate = < new del_MultiIterNewWrapper(MultiIterNewWrapper); < private static readonly del_DescrNewFromType DescrNewFromTypeDelegate = < new del_DescrNewFromType(DescrNewFromType); < private static readonly del_DescrNewFromWrapper DescrNewFromWrapperDelegate = < new del_DescrNewFromWrapper(DescrNewFromWrapper); < private static readonly del_UFuncNewWrapper UFuncNewWrapperDelegate = < new del_UFuncNewWrapper(UFuncNewWrapper); < < private static readonly del_Incref IncrefCallbackDelegate = < new del_Incref(IncrefCallback); < private static readonly del_Decref DecrefCallbackDelegate = < new del_Decref(DecrefCallback); < unsafe private static readonly del_SetErrorCallback SetErrorCallbackDelegate = < new del_SetErrorCallback(SetErrorCallback); < private static readonly del_ErrorOccurredCallback ErrorOccurredCallbackDelegate = < new del_ErrorOccurredCallback(ErrorOccurredCallback); < private static readonly del_ClearErrorCallback ClearErrorCallbackDelegate = < new del_ClearErrorCallback(ClearErrorCallback); < private static readonly del_EnableThreads EnableThreadsDelegate = < new del_EnableThreads(EnableThreads); < private static readonly del_DisableThreads DisableThreadsDelegate = < new del_DisableThreads(DisableThreads); < < private static unsafe readonly del_GetErrorState GetErrorStateDelegate = new del_GetErrorState(GetErrorState); < private static unsafe readonly del_ErrorHandler ErrorHandlerDelegate = new del_ErrorHandler(ErrorHandler); < < < /// < /// The native type code that matches up to a 32-bit int. < /// < internal static readonly NpyDefs.NPY_TYPES TypeOf_Int32; < < /// < /// Native type code that matches up to a 64-bit int. < /// < internal static readonly NpyDefs.NPY_TYPES TypeOf_Int64; < < /// < /// Native type code that matches up to a 32-bit unsigned int. < /// < internal static readonly NpyDefs.NPY_TYPES TypeOf_UInt32; < < /// < /// Native type code that matches up to a 64-bit unsigned int. < /// < internal static readonly NpyDefs.NPY_TYPES TypeOf_UInt64; < < /// < /// Size of element in integer arrays, in bytes. < /// < internal static readonly int Native_SizeOfInt; < < /// < /// Size of element in long arrays, in bytes. < /// < internal static readonly int Native_SizeOfLong; < < /// < /// Size of element in long long arrays, in bytes. < /// < internal static readonly int Native_SizeOfLongLong; < < /// < /// Size fo element in long double arrays, in bytes. < /// < internal static readonly int Native_SizeOfLongDouble; < < < /// < /// Initializes the core library with necessary callbacks on load. < /// < static NpyCoreApi() < { < try { < // Check the native byte ordering (make sure it matches what .NET uses) and < // figure out the mapping between types that vary in size in the core and < // fixed-size .NET types. < int intSize, longSize, longLongSize, longDoubleSize; < oppositeByteOrder = GetNativeTypeInfo(out intSize, out longSize, out longLongSize, < out longDoubleSize); < < Native_SizeOfInt = intSize; < Native_SizeOfLong = longSize; < Native_SizeOfLongLong = longLongSize; < Native_SizeOfLongDouble = longDoubleSize; < < // Important: keep this consistent with NpyArray_TypestrConvert in npy_conversion_utils.c < if (intSize == 4 && longSize == 4 && longLongSize == 8) { < TypeOf_Int32 = NpyDefs.NPY_TYPES.NPY_LONG; < TypeOf_Int64 = NpyDefs.NPY_TYPES.NPY_LONGLONG; < TypeOf_UInt32 = NpyDefs.NPY_TYPES.NPY_ULONG; < TypeOf_UInt64 = NpyDefs.NPY_TYPES.NPY_ULONGLONG; < } else if (intSize == 4 && longSize == 8 && longLongSize == 8) { < TypeOf_Int32 = NpyDefs.NPY_TYPES.NPY_INT; < TypeOf_Int64 = NpyDefs.NPY_TYPES.NPY_LONG; < TypeOf_UInt32 = NpyDefs.NPY_TYPES.NPY_UINT; < TypeOf_UInt64 = NpyDefs.NPY_TYPES.NPY_ULONG; < } else { < throw new NotImplementedException( < String.Format("Unimplemented combination of native type sizes: int = {0}b, long = {1}b, longlong = {2}b", < intSize, longSize, longLongSize)); < } < < < wrapFuncs = new NpyInterface_WrapperFuncs(); < < wrapFuncs.array_new_wrapper = < Marshal.GetFunctionPointerForDelegate(ArrayNewWrapDelegate); < wrapFuncs.iter_new_wrapper = < Marshal.GetFunctionPointerForDelegate(IterNewWrapperDelegate); < wrapFuncs.multi_iter_new_wrapper = < Marshal.GetFunctionPointerForDelegate(MultiIterNewWrapperDelegate); < wrapFuncs.neighbor_iter_new_wrapper = IntPtr.Zero; < wrapFuncs.descr_new_from_type = < Marshal.GetFunctionPointerForDelegate(DescrNewFromTypeDelegate); < wrapFuncs.descr_new_from_wrapper = < Marshal.GetFunctionPointerForDelegate(DescrNewFromWrapperDelegate); < wrapFuncs.ufunc_new_wrapper = < Marshal.GetFunctionPointerForDelegate(UFuncNewWrapperDelegate); < < int s = Marshal.SizeOf(wrapFuncs.descr_new_from_type); < < NumericOps.NpyArray_FunctionDefs funcDefs = NumericOps.GetFunctionDefs(); < IntPtr funcDefsHandle = IntPtr.Zero; < IntPtr wrapHandle = IntPtr.Zero; < try { < funcDefsHandle = Marshal.AllocHGlobal(Marshal.SizeOf(funcDefs)); < Marshal.StructureToPtr(funcDefs, funcDefsHandle, true); < wrapHandle = Marshal.AllocHGlobal(Marshal.SizeOf(wrapFuncs)); < Marshal.StructureToPtr(wrapFuncs, wrapHandle, true); < < npy_initlib(funcDefsHandle, wrapHandle, < Marshal.GetFunctionPointerForDelegate(SetErrorCallbackDelegate), < Marshal.GetFunctionPointerForDelegate(ErrorOccurredCallbackDelegate), < Marshal.GetFunctionPointerForDelegate(ClearErrorCallbackDelegate), < Marshal.GetFunctionPointerForDelegate(NumericOps.ComparePriorityDelegate), < Marshal.GetFunctionPointerForDelegate(IncrefCallbackDelegate), < Marshal.GetFunctionPointerForDelegate(DecrefCallbackDelegate), < IntPtr.Zero, IntPtr.Zero); < // for now we run full threaded, no safety net. < //Marshal.GetFunctionPointerForDelegate(EnableThreadsDelegate), < //Marshal.GetFunctionPointerForDelegate(DisableThreadsDelegate)); < } catch (Exception e) { < Console.WriteLine("Failed during initialization: {0}", e); < } finally { < Marshal.FreeHGlobal(funcDefsHandle); < Marshal.FreeHGlobal(wrapHandle); < } < < // Initialize the offsets to each structure type for fast access < // TODO: Not a great way to do this, but for now it's < // a convenient way to get hard field offsets from the core. < ArrayGetOffsets(out ArrayOffsets.off_magic_number, < out ArrayOffsets.off_descr, < out ArrayOffsets.off_nd, < out ArrayOffsets.off_dimensions, < out ArrayOffsets.off_strides, < out ArrayOffsets.off_flags, < out ArrayOffsets.off_data, < out ArrayOffsets.off_base_obj, < out ArrayOffsets.off_base_array); < < DescrGetOffsets(out DescrOffsets.off_magic_number, < out DescrOffsets.off_kind, < out DescrOffsets.off_type, < out DescrOffsets.off_byteorder, < out DescrOffsets.off_flags, < out DescrOffsets.off_type_num, < out DescrOffsets.off_elsize, < out DescrOffsets.off_alignment, < out DescrOffsets.off_names, < out DescrOffsets.off_subarray, < out DescrOffsets.off_fields, < out DescrOffsets.off_dtinfo, < out DescrOffsets.off_fields_offset, < out DescrOffsets.off_fields_descr, < out DescrOffsets.off_fields_title); < < IterGetOffsets(out IterOffsets.off_size, < out IterOffsets.off_index); < < MultiIterGetOffsets(out MultiIterOffsets.off_numiter, < out MultiIterOffsets.off_size, < out MultiIterOffsets.off_index, < out MultiIterOffsets.off_nd, < out MultiIterOffsets.off_dimensions, < out MultiIterOffsets.off_iters); < < GetIndexInfo(out IndexInfo.off_union, out IndexInfo.sizeof_index, out IndexInfo.max_dims); < < UFuncGetOffsets(out UFuncOffsets.off_nin, out UFuncOffsets.off_nout, < out UFuncOffsets.off_nargs, out UFuncOffsets.off_core_enabled, < out UFuncOffsets.off_identify, out UFuncOffsets.off_ntypes, < out UFuncOffsets.off_check_return, out UFuncOffsets.off_name, < out UFuncOffsets.off_types, out UFuncOffsets.off_core_signature); < < NpyUFunc_SetFpErrFuncs(GetErrorStateDelegate, ErrorHandlerDelegate); < < // Causes the sort functions to be registered with the type descriptor objects. < NpyArray_InitSortModule(); < } catch (Exception e) { < // Report any details that we can here because IronPython only reports < // that the static type initializer failed. < Console.WriteLine("Failed while initializing NpyCoreApi: {0}:{1}", e.GetType().Name, e.Message); < Console.WriteLine("NumpyDotNet stack trace:\n{0}", e.StackTrace); < throw e; < } < } < #endregion < < < #region Memory verification < < // Turns on/off verification of native memory handles. This functionality adds substantial runtime < // overhead but can be invaluable in tracking down accesses of freed pointers and other faults. < #if DEBUG < private const bool CheckMemoryAccesses = true; < #else < private const bool CheckMemoryAccesses = false; < #endif < < /// < /// Set of all currently allocated GCHandles and the type of handle. < /// < private static readonly Dictionary AllocatedHandles = new Dictionary(); < < /// < /// Set of freed GC handles that we should not be accessing. < /// < private static readonly HashSet FreedHandles = new HashSet(); < < /// < /// Allocates a GCHandle for a given object. If CheckMemoryAccesses is false, < /// this is inlined into the normal GCHandle call. If not, it performs the < /// access checking. < /// < /// Object to get a handle to < /// Handle type, default is normal < /// GCHandle instance < internal static GCHandle AllocGCHandle(Object o, GCHandleType type=GCHandleType.Normal) { < GCHandle h = GCHandle.Alloc(o, type); < if (CheckMemoryAccesses) { < lock (AllocatedHandles) { < IntPtr p = GCHandle.ToIntPtr(h); < if (AllocatedHandles.ContainsKey(p)) { < throw new AccessViolationException( < String.Format("Internal error: detected duplicate allocation of GCHandle. Probably a bookkeeping error. Handle is {0}.", < p)); < } < if (FreedHandles.Contains(p)) { < FreedHandles.Remove(p); < } < AllocatedHandles.Add(p, type); < } < } < return h; < } < < /// < /// Verifies that a GCHandle is known and good prior to using it. If < /// CheckMemoryAccesses is false, this is a no-op and goes away. < /// < /// Handle to verify < internal static GCHandle GCHandleFromIntPtr(IntPtr p, bool weakOk=false) { < if (CheckMemoryAccesses) { < lock (AllocatedHandles) { < GCHandleType handleType; < if (FreedHandles.Contains(p)) { < throw new AccessViolationException( < String.Format("Internal error: accessing already freed GCHandle {0}.", p)); < } < if (!AllocatedHandles.TryGetValue(p, out handleType)) { < throw new AccessViolationException( < String.Format("Internal error: attempt to access unknown GCHandle {0}.", p)); < } // else if (handleType == GCHandleType.Weak && !weakOk) { < // throw new AccessViolationException( < // String.Format("Internal error: invalid attempt to access weak reference {0}.", p)); < //} < } < } < return GCHandle.FromIntPtr(p); < } < < /// < /// Releases a GCHandle instance for an object. If CheckMemoryAccesses is < /// false this is inlined to the GCHandle.Free() method. Otherwise it verifies < /// that the handle is legit. < /// < /// GCHandle to release < internal static void FreeGCHandle(GCHandle h) { < if (CheckMemoryAccesses) { < lock (AllocatedHandles) { < IntPtr p = GCHandle.ToIntPtr(h); < if (FreedHandles.Contains(p)) { < throw new AccessViolationException( < String.Format("Internal error: freeing already freed GCHandle {0}.", p)); < } < if (!AllocatedHandles.ContainsKey(p)) { < throw new AccessViolationException( < String.Format("Internal error: freeing unknown GCHandle {0}.", p)); < } < AllocatedHandles.Remove(p); < FreedHandles.Add(p); < } < } < h.Free(); < } < < #endregion < } < } --- > using System; > using System.Collections.Generic; > using System.Diagnostics; > using System.Linq; > using System.Security; > using System.Text; > using System.Runtime.InteropServices; > using System.Runtime.CompilerServices; > using System.Threading; > using IronPython.Runtime; > using IronPython.Runtime.Types; > using IronPython.Runtime.Operations; > using IronPython.Modules; > using Microsoft.Scripting.Runtime; > using Microsoft.Scripting.Utils; > > namespace NumpyDotNet { > /// > /// NpyCoreApi class wraps the interactions with the libndarray core library. It > /// also makes use of NpyAccessLib.dll for a few functions that must be > /// implemented in native code. > /// > /// TODO: This class is going to get very large. Not sure if it's better to > /// try to break it up or just use partial classes and split it across > /// multiple files. > /// > [SuppressUnmanagedCodeSecurity] > public static class NpyCoreApi { > > /// > /// Stupid hack to allow us to pass an already-allocated wrapper instance > /// through the interfaceData argument and tell the wrapper creation functions > /// like ArrayNewWrapper to use an existing instance instead of creating a new > /// one. This is necessary because CPython does construction as an allocator > /// but .NET only triggers code after allocation. > /// > internal struct UseExistingWrapper > { > internal object Wrapper; > } > > #region API Wrappers > > /// > /// Returns a new descriptor object for internal types or user defined > /// types. > /// > internal static dtype DescrFromType(NpyDefs.NPY_TYPES type) { > // NOTE: No GIL wrapping here, function is re-entrant and includes locking. > IntPtr descr = NpyArray_DescrFromType((int)type); > CheckError(); > return DecrefToInterface(descr); > } > > internal static bool IsAligned(ndarray arr) { > lock (GlobalIterpLock) { > return Npy_IsAligned(arr.Array) != 0; > } > } > > internal static bool IsWriteable(ndarray arr) { > lock (GlobalIterpLock) { > return Npy_IsWriteable(arr.Array) != 0; > } > } > > internal static byte OppositeByteOrder { > get { return oppositeByteOrder; } > } > > internal static byte NativeByteOrder { > get { return (oppositeByteOrder == '<') ? (byte)'>' : (byte)'<'; } > } > > internal static dtype SmallType(dtype t1, dtype t2) { > lock (GlobalIterpLock) { > return ToInterface( > NpyArray_SmallType(t1.Descr, t2.Descr)); > } > } > > > /// > /// Moves the contents of src into dest. Arrays are assumed to have the > /// same number of elements, but can be different sizes and different types. > /// > /// Destination array > /// Source array > internal static void MoveInto(ndarray dest, ndarray src) { > lock (GlobalIterpLock) { > if (NpyArray_MoveInto(dest.Array, src.Array) == -1) { > CheckError(); > } > } > } > > > /// > /// Allocates a new array and returns the ndarray wrapper > /// > /// Type descriptor > /// Num of dimensions > /// Size of each dimension > /// True if Fortran layout, false for C layout > /// Newly allocated array > internal static ndarray AllocArray(dtype descr, int numdim, long[] dimensions, > bool fortran) { > IntPtr nativeDims = IntPtr.Zero; > > Incref(descr.Descr); > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArrayAccess_AllocArray(descr.Descr, numdim, dimensions, fortran)); > } > } > > /// > /// Constructs a new array from an input array and descriptor type. The > /// Underlying array may or may not be copied depending on the requirements. > /// > /// Source array > /// Desired type > /// New array flags > /// New array (may be source array) > internal static ndarray FromArray(ndarray src, dtype descr, int flags) { > if (descr == null && flags == 0) return src; > if (descr == null) descr = src.Dtype; > if (descr != null) NpyCoreApi.Incref(descr.Descr); > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_FromArray(src.Array, descr.Descr, flags)); > } > } > > > /// > /// Returns an array with the size or stride of each dimension in the given array. > /// > /// The array > /// True returns size of each dimension, false returns stride of each dimension > /// Array w/ an array size or stride for each dimension > internal static Int64[] GetArrayDimsOrStrides(ndarray arr, bool getDims) { > Int64[] retArr; > > IntPtr srcPtr = Marshal.ReadIntPtr(arr.Array, getDims ? ArrayOffsets.off_dimensions : ArrayOffsets.off_strides); > retArr = new Int64[arr.ndim]; > unsafe { > fixed (Int64* dimMem = retArr) { > lock (GlobalIterpLock) { > if (!GetIntpArray(srcPtr, arr.ndim, dimMem)) { > throw new IronPython.Runtime.Exceptions.RuntimeException("Error getting array dimensions."); > } > } > } > } > return retArr; > } > > internal static Int64[] GetArrayDims(broadcast iter, bool getDims) { > Int64[] retArr; > > // off_dimensions is to start of array, not pointer to array! > IntPtr srcPtr = iter.Iter + MultiIterOffsets.off_dimensions; > retArr = new Int64[iter.nd]; > unsafe { > fixed (Int64* dimMem = retArr) { > lock (GlobalIterpLock) { > if (!GetIntpArray(srcPtr, iter.nd, dimMem)) { > throw new IronPython.Runtime.Exceptions.RuntimeException("Error getting iterator dimensions."); > } > } > } > } > return retArr; > } > > internal static ndarray NewFromDescr(dtype descr, long[] dims, long[] strides, > int flags, object interfaceData) { > if (interfaceData == null) { > Incref(descr.Descr); > lock (GlobalIterpLock) { > return DecrefToInterface( > NewFromDescrThunk(descr.Descr, dims.Length, flags, dims, strides, IntPtr.Zero, IntPtr.Zero)); > } > } else { > GCHandle h = AllocGCHandle(interfaceData); > try { > Incref(descr.Descr); > Monitor.Enter(GlobalIterpLock); > return DecrefToInterface(NewFromDescrThunk(descr.Descr, dims.Length, > flags, dims, strides, IntPtr.Zero, GCHandle.ToIntPtr(h))); > } finally { > Monitor.Exit(GlobalIterpLock); > FreeGCHandle(h); > } > } > } > > internal static ndarray NewFromDescr(dtype descr, long[] dims, long[] strides, IntPtr data, > int flags, object interfaceData) { > if (interfaceData == null) { > Incref(descr.Descr); > lock (GlobalIterpLock) { > return DecrefToInterface( > NewFromDescrThunk(descr.Descr, dims.Length, flags, dims, strides, data, IntPtr.Zero)); > } > } else { > GCHandle h = AllocGCHandle(interfaceData); > try { > Incref(descr.Descr); > Monitor.Enter(GlobalIterpLock); > return DecrefToInterface(NewFromDescrThunk(descr.Descr, dims.Length, > flags, dims, strides, IntPtr.Zero, GCHandle.ToIntPtr(h))); > } finally { > Monitor.Exit(GlobalIterpLock); > FreeGCHandle(h); > } > } > } > > internal static flatiter IterNew(ndarray ao) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_IterNew(ao.Array)); > } > } > > internal static ndarray IterSubscript(flatiter iter, NpyIndexes indexes) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_IterSubscript(iter.Iter, indexes.Indexes, indexes.NumIndexes)); > } > } > > internal static void IterSubscriptAssign(flatiter iter, NpyIndexes indexes, ndarray val) { > lock (GlobalIterpLock) { > if (NpyArray_IterSubscriptAssign(iter.Iter, indexes.Indexes, indexes.NumIndexes, val.Array) < 0) { > CheckError(); > } > } > } > > internal static ndarray FlatView(ndarray a) > { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_FlatView(a.Array) > ); > } > } > > > /// > /// Creates a multiterator > /// > /// Sequence of objects to iterate over > /// Pointer to core multi-iterator structure > internal static IntPtr MultiIterFromObjects(IEnumerable objs) { > return MultiIterFromArrays(objs.Select(x => NpyArray.FromAny(x))); > } > > internal static IntPtr MultiIterFromArrays(IEnumerable arrays) { > IntPtr[] coreArrays = arrays.Select(x => { Incref(x.Array); return x.Array; }).ToArray(); > IntPtr result; > > lock (GlobalIterpLock) { > result = NpyArrayAccess_MultiIterFromArrays(coreArrays, coreArrays.Length); > } > CheckError(); > return result; > } > > internal static ufunc GetNumericOp(NpyDefs.NpyArray_Ops op) { > IntPtr ufuncPtr; > > lock (GlobalIterpLock) { > ufuncPtr = NpyArray_GetNumericOp((int)op); > } > return ToInterface(ufuncPtr); > } > > #if NOTDEF > internal static object GenericUnaryOp(ndarray a1, ufunc f, ndarray ret = null) { > // TODO: We need to do the error handling and wrapping of outputs. > Incref(a1.Array); > Incref(f.UFunc); > if (ret != null) { > Incref(ret.Array); > } > IntPtr result = NpyArray_GenericUnaryFunction(a1.Array, f.UFunc, > (ret == null ? IntPtr.Zero : ret.Array)); > ndarray rval = DecrefToInterface(result); > Decref(a1.Array); > Decref(f.UFunc); > if (ret == null) { > return ndarray.ArrayReturn(rval); > } else { > Decref(ret.Array); > return rval; > } > } > > internal static object GenericBinaryOp(ndarray a1, ndarray a2, ufunc f, ndarray ret = null) { > //ndarray arr = new ndarray[] { a1, a2, ret }; > //return GenericFunction(f, arr, null); > // TODO: We need to do the error handling and wrapping of outputs. > Incref(f.UFunc); > > IntPtr result = NpyArray_GenericBinaryFunction(a1.Array, a2.Array, f.UFunc, > (ret == null ? IntPtr.Zero : ret.Array)); > ndarray rval = DecrefToInterface(result); > Decref(f.UFunc); > > if (ret == null) { > return ndarray.ArrayReturn(rval); > } else { > return rval; > } > } > > #endif > > internal static object GenericReduction(ufunc f, ndarray arr, > ndarray indices, ndarray ret, int axis, dtype otype, ufunc.ReduceOp op) { > if (indices != null) { > Incref(indices.Array); > } > > ndarray rval; > lock (GlobalIterpLock) { > rval = DecrefToInterface( > NpyUFunc_GenericReduction(f.UFunc, arr.Array, > (indices != null) ? indices.Array : IntPtr.Zero, > (ret != null) ? ret.Array : IntPtr.Zero, > axis, (otype != null) ? otype.Descr : IntPtr.Zero, (int)op)); > } > if (rval != null) { > // TODO: Call array wrap processing: ufunc_object.c:1011 > } > return ndarray.ArrayReturn(rval); > } > > internal class PrepareArgs > { > internal CodeContext cntx; > internal Action prepare; > internal object[] args; > internal Exception ex; > } > > internal static int PrepareCallback(IntPtr ufunc, IntPtr arrays, IntPtr prepare_args) { > PrepareArgs args = (PrepareArgs)GCHandleFromIntPtr(prepare_args).Target; > ufunc f = ToInterface(ufunc); > ndarray[] arrs = new ndarray[f.nargs]; > // Copy the data into the array > for (int i = 0; i < arrs.Length; i++) { > arrs[i] = DecrefToInterface(Marshal.ReadIntPtr(arrays, IntPtr.Size * i)); > } > try { > args.prepare(args.cntx, f, arrs, args.args); > } catch (Exception ex) { > args.ex = ex; > return -1; > } finally { > // Copy the arrays back > for (int i = 0; i < arrs.Length; i++) { > IntPtr coreArray = arrs[i].Array; > Incref(coreArray); > Marshal.WriteIntPtr(arrays, IntPtr.Size * i, arrs[i].Array); > } > } > return 0; > } > > internal static void GenericFunction(CodeContext cntx, ufunc f, ndarray[] arrays, NpyDefs.NPY_TYPES[] sig, > Action prepare_outputs, object[] args) { > // Convert the typenums > int[] rtypenums = null; > int ntypenums = 0; > if (sig != null) { > rtypenums = sig.Cast().ToArray(); > ntypenums = rtypenums.Length; > } > unsafe { > // Convert and INCREF the arrays > IntPtr* mps = stackalloc IntPtr[arrays.Length]; > for (int i = 0; i < arrays.Length; i++) { > ndarray a = arrays[i]; > if (a == null) { > mps[i] = IntPtr.Zero; > } else { > IntPtr p = a.Array; > NpyCoreApi.Incref(p); > mps[i] = p; > } > } > > if (prepare_outputs != null) { > PrepareArgs pargs = new PrepareArgs { cntx = cntx, prepare = prepare_outputs, args = args, ex = null }; > GCHandle h = AllocGCHandle(pargs); > try { > int val; > Incref(f.UFunc); > lock (GlobalIterpLock) { > val = NpyUFunc_GenericFunction(f.UFunc, f.nargs, mps, ntypenums, rtypenums, 0, > PrepareCallback, GCHandle.ToIntPtr(h)); > } > if (val < 0) { > CheckError(); > if (pargs.ex != null) { > throw pargs.ex; > } > } > } finally { > // Release the handle > FreeGCHandle(h); > // Convert the args back. > for (int i = 0; i < arrays.Length; i++) { > if (mps[i] != IntPtr.Zero) { > arrays[i] = DecrefToInterface(mps[i]); > } else { > arrays[i] = null; > } > } > Decref(f.UFunc); > } > } else { > try { > Incref(f.UFunc); > Monitor.Enter(GlobalIterpLock); > if (NpyUFunc_GenericFunction(f.UFunc, f.nargs, mps, ntypenums, rtypenums, 0, > null, IntPtr.Zero) < 0) { > CheckError(); > } > } finally { > Monitor.Exit(GlobalIterpLock); > // Convert the args back. > for (int i = 0; i < arrays.Length; i++) { > if (mps[i] != IntPtr.Zero) { > arrays[i] = DecrefToInterface(mps[i]); > } else { > arrays[i] = null; > } > } > Decref(f.UFunc); > } > } > } > } > > internal static ndarray Byteswap(ndarray arr, bool inplace) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_Byteswap(arr.Array, inplace ? (byte)1 : (byte)0)); > } > } > > public static ndarray CastToType(ndarray arr, dtype d, bool fortran) { > Incref(d.Descr); > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_CastToType(arr.Array, d.Descr, (fortran ? 1 : 0))); > } > } > > internal static ndarray CheckAxis(ndarray arr, ref int axis, int flags) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_CheckAxis(arr.Array, ref axis, flags)); > } > } > > internal static void CopyAnyInto(ndarray dest, ndarray src) { > lock (GlobalIterpLock) { > if (NpyArray_CopyAnyInto(dest.Array, src.Array) < 0) { > CheckError(); > } > } > } > > internal static void DescrDestroyFields(IntPtr fields) { > lock (GlobalIterpLock) { > NpyDict_Destroy(fields); > } > } > > > internal static ndarray GetField(ndarray arr, dtype d, int offset) { > Incref(d.Descr); > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_GetField(arr.Array, d.Descr, offset)); > } > } > > internal static ndarray GetImag(ndarray arr) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_GetImag(arr.Array)); > } > } > > internal static ndarray GetReal(ndarray arr) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_GetReal(arr.Array)); > } > } > internal static ndarray GetField(ndarray arr, string name) { > NpyArray_DescrField field = GetDescrField(arr.Dtype, name); > dtype field_dtype = ToInterface(field.descr); > return GetField(arr, field_dtype, field.offset); > } > > internal static ndarray Newshape(ndarray arr, IntPtr[] dims, NpyDefs.NPY_ORDER order) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArrayAccess_Newshape(arr.Array, dims.Length, dims, (int)order)); > } > } > > internal static void SetShape(ndarray arr, IntPtr[] dims) { > lock (GlobalIterpLock) { > if (NpyArrayAccess_SetShape(arr.Array, dims.Length, dims) < 0) { > CheckError(); > } > } > } > > internal static void SetState(ndarray arr, IntPtr[] dims, NpyDefs.NPY_ORDER order, string rawdata) { > lock (GlobalIterpLock) { > NpyArrayAccess_SetState(arr.Array, dims.Length, dims, (int)order, rawdata, (rawdata != null) ? rawdata.Length : 0); > } > CheckError(); > } > > > internal static ndarray NewView(dtype d, int nd, IntPtr[] dims, IntPtr[] strides, > ndarray arr, IntPtr offset, bool ensure_array) { > Incref(d.Descr); > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_NewView(d.Descr, nd, dims, strides, arr.Array, offset, ensure_array ? 1 : 0)); > } > } > > /// > /// Returns a copy of the passed array in the specified order (C, Fortran) > /// > /// Array to copy > /// Desired order > /// New array > internal static ndarray NewCopy(ndarray arr, NpyDefs.NPY_ORDER order) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_NewCopy(arr.Array, (int)order)); > } > } > > internal static NpyDefs.NPY_TYPES TypestrConvert(int elsize, byte letter) { > lock (GlobalIterpLock) { > return (NpyDefs.NPY_TYPES)NpyArray_TypestrConvert(elsize, (int)letter); > } > } > > internal static void AddField(IntPtr fields, IntPtr names, int i, > string name, dtype fieldType, int offset, string title) { > Incref(fieldType.Descr); > lock (GlobalIterpLock) { > if (NpyArrayAccess_AddField(fields, names, i, name, fieldType.Descr, offset, title) < 0) { > CheckError(); > } > } > } > > internal static NpyArray_DescrField GetDescrField(dtype d, string name) { > NpyArray_DescrField result; > lock (GlobalIterpLock) { > if (NpyArrayAccess_GetDescrField(d.Descr, name, out result) < 0) { > throw new ArgumentException(String.Format("Field {0} does not exist", name)); > } > } > return result; > } > > internal static dtype DescrNewVoid(IntPtr fields, IntPtr names, int elsize, int flags, int alignment) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArrayAccess_DescrNewVoid(fields, names, elsize, flags, alignment)); > } > } > > internal static dtype DescrNewSubarray(dtype basetype, IntPtr[] shape) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArrayAccess_DescrNewSubarray(basetype.Descr, shape.Length, shape)); > } > } > > internal static dtype DescrNew(dtype d) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_DescrNew(d.Descr)); > } > } > > internal static void GetBytes(ndarray arr, byte[] bytes, NpyDefs.NPY_ORDER order) { > lock (GlobalIterpLock) { > if (NpyArrayAccess_GetBytes(arr.Array, bytes, bytes.LongLength, (int)order) < 0) { > CheckError(); > } > } > } > > internal static void FillWithObject(ndarray arr, object obj) { > GCHandle h = AllocGCHandle(obj); > try { > Monitor.Enter(GlobalIterpLock); > if (NpyArray_FillWithObject(arr.Array, GCHandle.ToIntPtr(h)) < 0) { > CheckError(); > } > } finally { > Monitor.Exit(GlobalIterpLock); > FreeGCHandle(h); > } > } > > internal static void FillWithScalar(ndarray arr, ndarray zero_d_array) { > lock (GlobalIterpLock) { > if (NpyArray_FillWithScalar(arr.Array, zero_d_array.Array) < 0) { > CheckError(); > } > } > } > > internal static ndarray View(ndarray arr, dtype d, object subtype) { > IntPtr descr = (d == null ? IntPtr.Zero : d.Descr); > if (descr != IntPtr.Zero) { > Incref(descr); > } > if (subtype != null) { > GCHandle h = AllocGCHandle(subtype); > try { > Monitor.Enter(GlobalIterpLock); > return DecrefToInterface( > NpyArray_View(arr.Array, descr, GCHandle.ToIntPtr(h))); > } finally { > Monitor.Exit(GlobalIterpLock); > FreeGCHandle(h); > } > } > else { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_View(arr.Array, descr, IntPtr.Zero)); > } > } > } > > internal static ndarray ViewLike(ndarray arr, ndarray proto) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArrayAccess_ViewLike(arr.Array, proto.Array)); > } > } > > internal static ndarray Subarray(ndarray self, IntPtr dataptr) { > lock (GlobalIterpLock) { > return DecrefToInterface(NpyArray_Subarray(self.Array, dataptr)); > } > } > > internal static dtype DescrNewByteorder(dtype d, char order) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_DescrNewByteorder(d.Descr, (byte)order)); > } > } > > internal static void UpdateFlags(ndarray arr, int flagmask) { > lock (GlobalIterpLock) { > NpyArray_UpdateFlags(arr.Array, flagmask); > } > } > > /// > /// Calls the fill function on the array dtype. This takes the first 2 values in the array and fills the array > /// so the difference between each pair of elements is the same. > /// > /// > internal static void Fill(ndarray arr) { > lock (GlobalIterpLock) { > if (NpyArrayAccess_Fill(arr.Array) < 0) { > CheckError(); > } > } > } > > internal static void SetDateTimeInfo(dtype d, string units, int num, int den, int events) { > lock (GlobalIterpLock) { > if (NpyArrayAccess_SetDateTimeInfo(d.Descr, units, num, den, events) < 0) { > CheckError(); > } > } > } > > internal static dtype InheritDescriptor(dtype t1, dtype other) { > lock (GlobalIterpLock) { > return DecrefToInterface(NpyArrayAccess_InheritDescriptor(t1.Descr, other.Descr)); > } > } > > internal static bool EquivTypes(dtype d1, dtype d2) { > lock (GlobalIterpLock) { > return NpyArray_EquivTypes(d1.Descr, d2.Descr) != 0; > } > } > > internal static bool CanCastTo(dtype d1, dtype d2) { > lock (GlobalIterpLock) { > return NpyArray_CanCastTo(d1.Descr, d2.Descr); > } > } > > /// > /// Returns the PEP 3118 format encoding for the type of an array. > /// > /// Array to get the format string for > /// Format string > internal static string GetBufferFormatString(ndarray arr) { > IntPtr ptr; > lock (GlobalIterpLock) { > ptr = NpyArrayAccess_GetBufferFormatString(arr.Array); > } > > String s = Marshal.PtrToStringAnsi(ptr); > lock (GlobalIterpLock) { > NpyArrayAccess_Free(ptr); // ptr was allocated with malloc, not SysStringAlloc - don't use automatic marshalling > } > return s; > } > > > /// > /// Reads the specified text or binary file and produces an array from the content. Currently only > /// the file name is allowed and not a PythonFile or Stream type due to limitations in the core > /// (assumes FILE *). > /// > /// File to read > /// Type descriptor for the resulting array > /// Number of elements to read, less than zero reads all available > /// Element separator string for text files, null for binary files > /// Array of file contents > internal static ndarray ArrayFromFile(string fileName, dtype type, int count, string sep) { > lock (GlobalIterpLock) { > return DecrefToInterface(NpyArrayAccess_FromFile(fileName, (type != null) ? type.Descr : IntPtr.Zero, count, sep)); > } > } > > > internal static ndarray ArrayFromString(string data, dtype type, int count, string sep) { > if (type != null) Incref(type.Descr); > lock (GlobalIterpLock) { > return DecrefToInterface(NpyArray_FromString(data, (IntPtr)data.Length, (type != null) ? type.Descr : IntPtr.Zero, count, sep)); > } > } > > internal static ndarray ArrayFromBytes(byte[] data, dtype type, int count, string sep) { > if (type != null) Incref(type.Descr); > lock (GlobalIterpLock) { > return DecrefToInterface(NpyArray_FromBytes(data, (IntPtr)data.Length, (type != null) ? type.Descr : IntPtr.Zero, count, sep)); > } > } > > internal static ndarray CompareStringArrays(ndarray a1, ndarray a2, NpyDefs.NPY_COMPARE_OP op, > bool rstrip = false) { > lock (GlobalIterpLock) { > return DecrefToInterface( > NpyArray_CompareStringArrays(a1.Array, a2.Array, (int)op, rstrip ? 1 : 0)); > } > } > > // API Defintions: every native call is private and must currently be wrapped by a function > // that at least holds the global interpreter lock (GlobalInterpLock). > internal static int ElementStrides(ndarray arr) { lock (GlobalIterpLock) { return NpyArray_ElementStrides(arr.Array); } } > > internal static IntPtr ArraySubscript(ndarray arr, NpyIndexes indexes) { > lock (GlobalIterpLock) { return NpyArray_Subscript(arr.Array, indexes.Indexes, indexes.NumIndexes); } > } > > internal static void IndexDealloc(NpyIndexes indexes) { > lock (GlobalIterpLock) { NpyArray_IndexDealloc(indexes.Indexes, indexes.NumIndexes); } > } > > internal static IntPtr ArraySize(ndarray arr) { > lock (GlobalIterpLock) { return NpyArray_Size(arr.Array); } > } > > /// > /// Indexes an array by a single long and returns the sub-array. > /// > /// The index into the array. > /// The sub-array. > internal static ndarray ArrayItem(ndarray arr, long index) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_ArrayItem(arr.Array, (IntPtr)index)); > } > } > > internal static ndarray IndexSimple(ndarray arr, NpyIndexes indexes) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_IndexSimple(arr.Array, indexes.Indexes, indexes.NumIndexes)); > } > } > > internal static int IndexFancyAssign(ndarray dest, NpyIndexes indexes, ndarray values) { > lock (GlobalIterpLock) { return NpyArray_IndexFancyAssign(dest.Array, indexes.Indexes, indexes.NumIndexes, values.Array); } > } > > internal static int SetField(ndarray arr, IntPtr dtype, int offset, ndarray srcArray) { > lock (GlobalIterpLock) { return NpyArray_SetField(arr.Array, dtype, offset, srcArray.Array); } > } > > internal static void SetNumericOp(int op, ufunc ufunc) { > lock (GlobalIterpLock) { NpyArray_SetNumericOp(op, ufunc.UFunc); } > } > > internal static ndarray ArrayAll(ndarray arr, int axis, ndarray ret = null) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_All(arr.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); > } > } > > internal static ndarray ArrayAny(ndarray arr, int axis, ndarray ret = null) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Any(arr.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); > } > } > > internal static ndarray ArrayArgMax(ndarray self, int axis, ndarray ret) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_ArgMax(self.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); > } > } > > internal static ndarray ArgSort(ndarray arr, int axis, NpyDefs.NPY_SORTKIND sortkind) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_ArgSort(arr.Array, axis, (int)sortkind)); > } > } > > internal static int ArrayBool(ndarray arr) { > lock (GlobalIterpLock) { return NpyArray_Bool(arr.Array); } > } > > internal static int ScalarKind(int typenum, ndarray arr) { > lock (GlobalIterpLock) { return NpyArray_ScalarKind(typenum, arr.Array); } > } > > internal static ndarray Choose(ndarray sel, ndarray[] arrays, ndarray ret = null, NpyDefs.NPY_CLIPMODE clipMode = NpyDefs.NPY_CLIPMODE.NPY_RAISE) { > lock (GlobalIterpLock) { > IntPtr[] coreArrays = arrays.Select(x => x.Array).ToArray(); > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_Choose(sel.Array, coreArrays, coreArrays.Length, > ret == null ? IntPtr.Zero : ret.Array, (int)clipMode)); > } > } > > internal static ndarray Conjugate(ndarray arr, ndarray ret = null) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyArray_Conjugate(arr.Array, (ret == null ? IntPtr.Zero : ret.Array))); > } > } > > internal static ndarray Correlate(ndarray arr1, ndarray arr2, NpyDefs.NPY_TYPES typenum, int mode) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyArray_Correlate(arr1.Array, arr2.Array, (int)typenum, mode)); > } > } > > internal static ndarray Correlate2(ndarray arr1, ndarray arr2, NpyDefs.NPY_TYPES typenum, int mode) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyArray_Correlate2(arr1.Array, arr2.Array, (int)typenum, mode)); > } > } > > internal static ndarray CopyAndTranspose(ndarray arr) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyArray_CopyAndTranspose(arr.Array)); > } > } > > internal static ndarray CumProd(ndarray arr, int axis, dtype rtype, ndarray ret = null) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_CumProd(arr.Array, axis, > (int)(rtype == null ? NpyDefs.NPY_TYPES.NPY_NOTYPE : rtype.TypeNum), > (ret == null ? IntPtr.Zero : ret.Array))); > } > } > > internal static ndarray CumSum(ndarray arr, int axis, dtype rtype, ndarray ret = null) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_CumSum(arr.Array, axis, > (int)(rtype == null ? NpyDefs.NPY_TYPES.NPY_NOTYPE : rtype.TypeNum), > (ret == null ? IntPtr.Zero : ret.Array))); > } > } > > internal static void DestroySubarray(IntPtr subarrayPtr) { > lock (GlobalIterpLock) { NpyArray_DestroySubarray(subarrayPtr); } > } > > internal static int DescrFindObjectFlag(dtype type) { > lock (GlobalIterpLock) { return NpyArray_DescrFindObjectFlag(type.Descr); } > } > > internal static ndarray Flatten(ndarray arr, NpyDefs.NPY_ORDER order) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_Flatten(arr.Array, (int)order)); > } > > internal static ndarray InnerProduct(ndarray arr1, ndarray arr2, NpyDefs.NPY_TYPES type) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_InnerProduct(arr1.Array, arr2.Array, (int)type)); > } > } > > internal static ndarray LexSort(ndarray[] arrays, int axis) { > int n = arrays.Length; > IntPtr[] coreArrays = arrays.Select(x => x.Array).ToArray(); > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_LexSort(coreArrays, n, axis)); > } > } > > internal static ndarray MatrixProduct(ndarray arr1, ndarray arr2, NpyDefs.NPY_TYPES type) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyArray_MatrixProduct(arr1.Array, arr2.Array, (int)type)); > } > } > > internal static ndarray ArrayMax(ndarray arr, int axis, ndarray ret = null) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Max(arr.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); > } > } > > internal static ndarray ArrayMin(ndarray arr, int axis, ndarray ret = null) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Min(arr.Array, axis, (ret == null ? IntPtr.Zero : ret.Array))); > } > } > > internal static ndarray[] NonZero(ndarray arr) { > int nd = arr.ndim; > IntPtr[] coreArrays = new IntPtr[nd]; > GCHandle h = NpyCoreApi.AllocGCHandle(arr); > try { > Monitor.Enter(GlobalIterpLock); > if (NpyCoreApi.NpyArray_NonZero(arr.Array, coreArrays, GCHandle.ToIntPtr(h)) < 0) { > NpyCoreApi.CheckError(); > } > } finally { > Monitor.Exit(GlobalIterpLock); > NpyCoreApi.FreeGCHandle(h); > } > return coreArrays.Select(x => NpyCoreApi.DecrefToInterface(x)).ToArray(); > } > > internal static ndarray Prod(ndarray arr, int axis, dtype rtype, ndarray ret = null) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_Prod(arr.Array, axis, > (int)(rtype == null ? NpyDefs.NPY_TYPES.NPY_NOTYPE : rtype.TypeNum), > (ret == null ? IntPtr.Zero : ret.Array))); > } > } > > internal static int PutMask(ndarray arr, ndarray values, ndarray mask) { > lock (GlobalIterpLock) { > return NpyArray_PutMask(arr.Array, values.Array, mask.Array); > } > } > > internal static int PutTo(ndarray arr, ndarray values, ndarray indices, NpyDefs.NPY_CLIPMODE clipmode) { > lock (GlobalIterpLock) { > return NpyArray_PutTo(arr.Array, values.Array, indices.Array, (int)clipmode); > } > } > > > internal static ndarray Ravel(ndarray arr, NpyDefs.NPY_ORDER order) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Ravel(arr.Array, (int)order)); > } > } > > internal static ndarray Repeat(ndarray arr, ndarray repeats, int axis) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Repeat(arr.Array, repeats.Array, axis)); > } > } > > internal static ndarray Searchsorted(ndarray arr, ndarray keys, NpyDefs.NPY_SEARCHSIDE side) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_SearchSorted(arr.Array, keys.Array, (int)side)); > } > } > > internal static void Sort(ndarray arr, int axis, NpyDefs.NPY_SORTKIND sortkind) { > lock (GlobalIterpLock) { > if (NpyCoreApi.NpyArray_Sort(arr.Array, axis, (int)sortkind) < 0) { > NpyCoreApi.CheckError(); > } > } > } > > internal static ndarray Squeeze(ndarray arr) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_Squeeze(arr.Array)); > } > } > > internal static ndarray Sum(ndarray arr, int axis, dtype rtype, ndarray ret = null) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_Sum(arr.Array, axis, > (int)(rtype == null ? NpyDefs.NPY_TYPES.NPY_NOTYPE : rtype.TypeNum), > (ret == null ? IntPtr.Zero : ret.Array))); > } > > internal static ndarray SwapAxis(ndarray arr, int a1, int a2) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface(NpyCoreApi.NpyArray_SwapAxes(arr.Array, a1, a2)); > } > } > > internal static ndarray TakeFrom(ndarray arr, ndarray indices, int axis, ndarray ret, NpyDefs.NPY_CLIPMODE clipMode) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArray_TakeFrom(arr.Array, indices.Array, axis, (ret != null ? ret.Array : IntPtr.Zero), (int)clipMode) > ); > } > } > > internal static bool DescrIsNative(dtype type) { > lock (GlobalIterpLock) { > return NpyCoreApi.DescrIsNative(type.Descr) != 0; > } > } > > #endregion > > > #region C API Definitions > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_DescrNew(IntPtr descr); > internal static IntPtr DescrNewRaw(dtype d) { > lock (GlobalIterpLock) { return NpyArray_DescrNew(d.Descr); } > } > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_DescrFromType(Int32 type); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_SmallType(IntPtr descr1, IntPtr descr2); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern byte NpyArray_EquivTypes(IntPtr t1, IntPtr typ2); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_ElementStrides(IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_MoveInto(IntPtr dest, IntPtr src); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_FromArray(IntPtr arr, IntPtr descr, int flags); > > //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > //private static extern void NpyArray_dealloc(IntPtr arr); > > //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > //private static extern void NpyArray_DescrDestroy(IntPtr arr); > > //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > //internal static extern void NpyArray_DescrDeallocNamesAndFields(IntPtr dtype); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Subarray(IntPtr arr, IntPtr dataptr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Subscript(IntPtr arr, IntPtr indexes, int n); > > //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > //private static extern int NpyArray_SubscriptAssign(IntPtr self, IntPtr indexes, int n, IntPtr value); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArray_IndexDealloc(IntPtr indexes, int n); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Size(IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_ArrayItem(IntPtr array, IntPtr index); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_IndexSimple(IntPtr arr, IntPtr indexes, int n); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_IndexFancyAssign(IntPtr dest, IntPtr indexes, int n, IntPtr value_array); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_SetField(IntPtr arr, IntPtr descr, int offset, IntPtr val); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int Npy_IsAligned(IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int Npy_IsWriteable(IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_IterNew(IntPtr ao); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_IterSubscript(IntPtr iter, IntPtr indexes, int n); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_IterSubscriptAssign(IntPtr iter, IntPtr indexes, int n, IntPtr array_val); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_FillWithObject(IntPtr arr, IntPtr obj); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_FillWithScalar(IntPtr arr, IntPtr zero_d_array); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_FlatView(IntPtr arr); > > //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > //private static extern void npy_ufunc_dealloc(IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_GetNumericOp(int op); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArray_SetNumericOp(int op, IntPtr ufunc); > > //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > //internal static extern IntPtr NpyArray_GenericUnaryFunction(IntPtr arr1, IntPtr ufunc, IntPtr ret); > > //[DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > //internal static extern IntPtr NpyArray_GenericBinaryFunction(IntPtr arr1, IntPtr arr2, IntPtr ufunc, IntPtr ret); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_All(IntPtr self, int axis, IntPtr ret); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Any(IntPtr self, int axis, IntPtr ret); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_ArgMax(IntPtr self, int axis, IntPtr ret); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_ArgSort(IntPtr arr, int axis, int sortkind); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_Bool(IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_ScalarKind(int typenum, IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Byteswap(IntPtr arr, byte inplace); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern bool NpyArray_CanCastTo(IntPtr fromDtype, IntPtr toDtype); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_CastToType(IntPtr array, IntPtr descr, int fortran); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_CheckAxis(IntPtr arr, ref int axis, > int flags); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Choose(IntPtr array, > [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 2)]IntPtr[] mps, int n, IntPtr ret, int clipMode); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_CompareStringArrays(IntPtr a1, IntPtr a2, > int op, int rstrip); > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Conjugate(IntPtr arr, IntPtr ret); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Correlate(IntPtr arr1, IntPtr arr2, int typenum, int mode); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Correlate2(IntPtr arr1, IntPtr arr2, int typenum, int mode); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_CopyAndTranspose(IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_CopyAnyInto(IntPtr dest, IntPtr src); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_CumProd(IntPtr arr, int axis, int > rtype, IntPtr ret); > > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_CumSum(IntPtr arr, int axis, int > rtype, IntPtr ret); > > // Reentrant - does not need to be wrapped. > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArray_DescrAllocNames")] > internal static extern IntPtr DescrAllocNames(int n); > > // Reentrant - does not need to be wrapped. > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArray_DescrAllocFields")] > internal static extern IntPtr DescrAllocFields(); > > /// > /// Deallocates a subarray block. The pointer passed in is descr->subarray, not > /// a descriptor object itself. > /// > /// Subarray structure > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArray_DestroySubarray(IntPtr subarrayPtr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, > EntryPoint="npy_descr_find_object_flag")] > private static extern int NpyArray_DescrFindObjectFlag(IntPtr subarrayPtr); > > // Reentrant -- does not need to be wrapped. > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArray_DateTimeInfoNew")] > internal static extern IntPtr DateTimeInfoNew(string units, int num, int den, int events); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_DescrNewByteorder(IntPtr descr, byte order); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Flatten(IntPtr arr, int order); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_GetField(IntPtr arr, IntPtr dtype, int offset); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_GetImag(IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_GetReal(IntPtr arr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_InnerProduct(IntPtr arr, IntPtr arr2, int type); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_LexSort( > [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] IntPtr[] mps, int n, int axis); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_MatrixProduct(IntPtr arr, IntPtr arr2, int type); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Max(IntPtr arr, int axis, IntPtr ret); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Min(IntPtr arr, int axis, IntPtr ret); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_NewCopy(IntPtr arr, int order); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_NewView(IntPtr descr, int nd, > [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] dims, > [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] strides, > IntPtr arr, IntPtr offset, int ensureArray); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_NonZero(IntPtr self, > [MarshalAs(UnmanagedType.LPArray,SizeConst=NpyDefs.NPY_MAXDIMS)] IntPtr[] index_arrays, > IntPtr obj); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Prod(IntPtr arr, int axis, int > rtype, IntPtr ret); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_PutMask(IntPtr arr, IntPtr values, IntPtr mask); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_PutTo(IntPtr arr, IntPtr values, IntPtr indices, int clipmode); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Ravel(IntPtr arr, int fortran); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Repeat(IntPtr arr, IntPtr repeats, int axis); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_SearchSorted(IntPtr op1, IntPtr op2, int side); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_Sort(IntPtr arr, int axis, int sortkind); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Squeeze(IntPtr self); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_Sum(IntPtr arr, int axis, int rtype, IntPtr ret); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_SwapAxes(IntPtr arr, int a1, int a2); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_TakeFrom(IntPtr self, IntPtr indices, int axis, IntPtr ret, int clipMode); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArray_TypestrConvert(int itemsize, int gentype); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArray_UpdateFlags(IntPtr arr, int flagmask); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_View(IntPtr arr, IntPtr descr, IntPtr subtype); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyDict_Destroy(IntPtr dict); > > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > internal delegate int del_PrepareOutputs(IntPtr ufunc, IntPtr arrays, IntPtr args); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static unsafe extern int NpyUFunc_GenericFunction(IntPtr func, int nargs, IntPtr* mps, > int ntypenums, [In][MarshalAs(UnmanagedType.LPArray)] int[] rtypenums, > int originalObjectWasArray, del_PrepareOutputs npy_prepare_outputs_func, IntPtr prepare_out_args); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyUFunc_GenericReduction(IntPtr ufunc, > IntPtr arr, IntPtr indices, IntPtr arrOut, int axis, IntPtr descr, > int operation); > > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > internal unsafe delegate void del_GetErrorState(int* bufsizep, int* maskp, IntPtr* objp); > > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > internal unsafe delegate void del_ErrorHandler(sbyte* name, int errormask, IntPtr errobj, int retstatus, int* first); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyUFunc_SetFpErrFuncs(del_GetErrorState errorState, del_ErrorHandler handler); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArray_FromString(string data, IntPtr len, IntPtr dtype, int num, string sep); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArray_FromString")] > private static extern IntPtr NpyArray_FromBytes(byte[] data, IntPtr len, IntPtr dtype, int num, string sep); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint="npy_arraydescr_isnative")] > private static extern int DescrIsNative(IntPtr descr); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl)] > private static extern void npy_initlib(IntPtr functionDefs, IntPtr wrapperFuncs, > IntPtr error_set, IntPtr error_occured, IntPtr error_clear, > IntPtr cmp_priority, IntPtr incref, IntPtr decref, > IntPtr enable_thread, IntPtr disable_thread); > > [DllImport("ndarray", CallingConvention = CallingConvention.Cdecl, EntryPoint = "npy_add_sortfuncs")] > private static extern IntPtr NpyArray_InitSortModule(); > > #endregion > > #region NpyAccessLib functions > > internal static void ArraySetDescr(ndarray arr, dtype newDescr) { > lock (GlobalIterpLock) { NpyArrayAccess_ArraySetDescr(arr.Array, newDescr.Descr); } > } > > internal static long GetArrayStride(ndarray arr, int dims) { > lock (GlobalIterpLock) { > return NpyCoreApi.NpyArrayAccess_GetArrayStride(arr.Array, dims); > } > } > > internal static int BindIndex(ndarray arr, NpyIndexes indexes, NpyIndexes result) { > lock (GlobalIterpLock) { > return NpyCoreApi.NpyArrayAccess_BindIndex(arr.Array, indexes.Indexes, indexes.NumIndexes, result.Indexes); > } > } > > internal static int GetFieldOffset(dtype descr, string fieldName, out IntPtr descrPtr) { > lock (GlobalIterpLock) { > return NpyCoreApi.NpyArrayAccess_GetFieldOffset(descr.Descr, fieldName, out descrPtr); > } > } > > internal static void Resize(ndarray arr, IntPtr[] newshape, bool refcheck, NpyDefs.NPY_ORDER order) { > lock (GlobalIterpLock) { > if (NpyCoreApi.NpyArrayAccess_Resize(arr.Array, newshape.Length, newshape, (refcheck ? 1 : 0), (int)order) < 0) { > NpyCoreApi.CheckError(); > } > } > } > > internal static ndarray Transpose(ndarray arr, IntPtr[] permute) { > lock (GlobalIterpLock) { > return NpyCoreApi.DecrefToInterface( > NpyCoreApi.NpyArrayAccess_Transpose(arr.Array, (permute != null) ? permute.Length : 0, permute)); > } > } > > internal static void ClearUPDATEIFCOPY(ndarray arr) { > lock (GlobalIterpLock) { > NpyArrayAccess_ClearUPDATEIFCOPY(arr.Array); > } > } > > > internal static IntPtr IterNext(IntPtr corePtr) { > lock (GlobalIterpLock) { > return NpyArrayAccess_IterNext(corePtr); > } > } > > internal static void IterReset(IntPtr iter) { > lock (GlobalIterpLock) { > NpyArrayAccess_IterReset(iter); > } > } > > internal static IntPtr IterGoto1D(flatiter iter, IntPtr index) { > lock (GlobalIterpLock) { > return NpyArrayAccess_IterGoto1D(iter.Iter, index); > } > } > > internal static IntPtr IterCoords(flatiter iter) { > lock (GlobalIterpLock) { > return NpyArrayAccess_IterCoords(iter.Iter); > } > } > > internal static void DescrReplaceSubarray(dtype descr, dtype baseDescr, IntPtr[] dims) { > lock (GlobalIterpLock) { > NpyArrayAccess_DescrReplaceSubarray(descr.Descr, baseDescr.Descr, dims.Length, dims); > } > } > > internal static void DescrReplaceFields(dtype descr, IntPtr namesPtr, IntPtr fieldsDict) { > lock (GlobalIterpLock) { > NpyArrayAccess_DescrReplaceFields(descr.Descr, namesPtr, fieldsDict); > } > } > > internal static void ZeroFill(ndarray arr, IntPtr offset) { > lock (GlobalIterpLock) { > NpyArrayAccess_ZeroFill(arr.Array, offset); > } > } > > /// > /// Allocates a block of memory using NpyDataMem_NEW that is the same size as a single > /// array element and zeros the bytes. This is usually good enough, but is not a correct > /// zero for object arrays. The caller must free the memory with NpyDataMem_FREE(). > /// > /// Array to take the element size from > /// Pointer to zero'd memory > internal static IntPtr DupZeroElem(ndarray arr) { > lock (GlobalIterpLock) { > return NpyArrayAccess_DupZeroElem(arr.Array); > } > } > > internal unsafe static void CopySwapIn(ndarray arr, long offset, void* data, bool swap) { > lock (GlobalIterpLock) { > NpyArrayAccess_CopySwapIn(arr.Array, offset, data, swap ? 1 : 0); > } > } > > internal unsafe static void CopySwapOut(ndarray arr, long offset, void* data, bool swap) { > lock (GlobalIterpLock) { > NpyArrayAccess_CopySwapOut(arr.Array, offset, data, swap ? 1 : 0); > } > } > > internal unsafe static void CopySwapScalar(dtype dtype, void* dest, void* src, bool swap) { > lock (GlobalIterpLock) { > NpyArrayAccess_CopySwapScalar(dtype.Descr, dest, src, swap); > } > } > > internal static void SetNamesList(dtype descr, string[] nameslist) { > lock (GlobalIterpLock) { > NpyArrayAccess_SetNamesList(descr.Descr, nameslist, nameslist.Length); > } > } > > /// > /// Deallocates the core data structure. The obj IntRef is no longer valid after this > /// point and there must not be any existing internal core references to this object > /// either. > /// > /// Core NpyObject instance to deallocate > internal static void Dealloc(IntPtr obj) { > lock(GlobalIterpLock) { > NpyArrayAccess_Dealloc(obj); > } > } > > > /// > /// Constructs a native ufunc object from a Python function. The inputs define the > /// number of arguments taken, number of outputs, and function name. The pyLoopFunc > /// function implements the iteration over a given array and should always by PyUFunc_Om_On. > /// pyFunc is the actual function object to call. > /// > /// Number of input arguments > /// Number of result values (a PythonTuple if > 1) > /// Name of the function > /// PyUFunc_Om_On, implements looping over the array > /// Function to call > internal static IntPtr UFuncFromPyFunc(int nin, int nout, String funcName, > IntPtr pyWrapperFunc, IntPtr pyFunc) { > lock (GlobalIterpLock) { > return NpyUFuncAccess_UFuncFromPyFunc(nin, nout, funcName, pyWrapperFunc, pyFunc); > } > } > > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > internal static extern void NpyUFuncAccess_Init(IntPtr funcDict, > IntPtr funcDefs, IntPtr callMethodFunc, IntPtr addToDictFunc); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArrayAccess_ArraySetDescr(IntPtr array, IntPtr newDescr); > > /// > /// Increments the reference count of the core object. This routine is re-entrant and > /// locking is handled at the bottom layer. > /// > /// Pointer to the core object to increment reference count to > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint="NpyArrayAccess_Incref")] > internal static extern void Incref(IntPtr obj); > > /// > /// Decrements the reference count of the core object. This can trigger the release of > /// the reference to the managed wrapper and eventually trigger a garbage collection of > /// the object. If the core object does not have a managed wrapper, this can trigger the > /// immediate destruction of the core object. > /// > /// This function is re-entrant/thread-safe. > /// > /// Pointer to the core object > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint="NpyArrayAccess_Decref")] > internal static extern void Decref(IntPtr obj); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_GetNativeTypeInfo")] > private static extern byte GetNativeTypeInfo(out int intSize, > out int longsize, out int longLongSize, out int longDoubleSize); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_GetIntpArray")] > unsafe private static extern bool GetIntpArray(IntPtr srcPtr, int len, Int64 *dimMem); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_AllocArray(IntPtr descr, int nd, > [In][MarshalAs(UnmanagedType.LPArray,SizeParamIndex=1)] long[] dims, bool fortran); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern long NpyArrayAccess_GetArrayStride(IntPtr arr, int dims); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArrayAccess_BindIndex(IntPtr arr, IntPtr indexes, int n, IntPtr bound_indexes); > > [StructLayout(LayoutKind.Sequential)] > internal struct NpyArray_DescrField > { > internal IntPtr descr; > internal int offset; > internal IntPtr title; > } > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArrayAccess_GetDescrField(IntPtr descr, > [In][MarshalAs(UnmanagedType.LPStr)]string name, out NpyArray_DescrField field); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArrayAccess_GetFieldOffset(IntPtr descr, [MarshalAs(UnmanagedType.LPStr)] string fieldName, out IntPtr out_descr); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_MultiIterFromArrays([MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] arrays, int n); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_Newshape(IntPtr arr, int ndim, > [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] dims, > int order); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArrayAccess_SetShape(IntPtr arr, int ndim, > [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] dims); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArrayAccess_SetState(IntPtr arr, int ndim, > [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]IntPtr[] dims, int order, > // Note string is marshalled as LPWStr (16-bit unicode) to avoid making a copy of it > [MarshalAsAttribute(UnmanagedType.LPWStr)]string rawdata, int rawLength); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArrayAccess_Resize(IntPtr arr, int ndim, > [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] IntPtr[] newshape, int resize, int fortran); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_Transpose(IntPtr arr, int ndim, > [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] IntPtr[] permute); > > /// > /// Returns the current ABI version. Re-entrant, does not need locking. > /// > /// current version # > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArrayAccess_GetAbiVersion")] > internal static extern float GetAbiVersion(); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArrayAccess_ClearUPDATEIFCOPY(IntPtr arr); > > > /// > /// Deallocates an NpyObject. Thread-safe. > /// > /// The object to deallocate > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArrayAccess_Dealloc(IntPtr obj); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_IterNext")] > private static extern IntPtr NpyArrayAccess_IterNext(IntPtr iter); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_IterReset")] > private static extern void NpyArrayAccess_IterReset(IntPtr iter); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_IterGoto1D")] > private static extern IntPtr NpyArrayAccess_IterGoto1D(IntPtr iter, IntPtr index); > > // Re-entrant > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_IterArray")] > internal static extern IntPtr IterArray(IntPtr iter); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_IterCoords(IntPtr iter); > > // > // Offset functions - these return the offsets to fields in native structures > // as a workaround for not being able to include the C header file. > // > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_ArrayGetOffsets")] > private static extern void ArrayGetOffsets(out int magicNumOffset, > out int descrOffset, out int ndOffset, out int dimensionsOffset, > out int stridesOffset, out int flagsOffset, out int dataOffset, > out int baseObjOffset, out int baseArrayOffset); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_DescrGetOffsets")] > private static extern void DescrGetOffsets(out int magicNumOffset, > out int kindOffset, out int typeOffset, out int byteorderOffset, > out int flagsOffset, out int typenumOffset, out int elsizeOffset, > out int alignmentOffset, out int namesOFfset, out int subarrayOffset, > out int fieldsOffset, out int dtinfoOffset, out int fieldsOffsetOffset, > out int fieldsDescrOffset, out int fieldsTitleOffset); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_IterGetOffsets")] > private static extern void IterGetOffsets(out int sizeOffset, out int indexOffset); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArrayAccess_MultiIterGetOffsets")] > private static extern void MultiIterGetOffsets(out int numiterOffset, out int sizeOffset, > out int indexOffset, out int ndOffset, out int dimensionsOffset, out int itersOffset); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_UFuncGetOffsets")] > private static extern void UFuncGetOffsets(out int ninOffset, > out int noutOffset, out int nargsOffset, out int coreEnabledOffset, > out int identifyOffset, out int ntypesOffset, out int checkRetOffset, > out int nameOffset, out int typesOffset, out int coreSigOffset); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_GetIndexInfo")] > private static extern void GetIndexInfo(out int unionOffset, out int indexSize, out int maxDims); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, > EntryPoint = "NpyArrayAccess_NewFromDescrThunk")] > private static extern IntPtr NewFromDescrThunk(IntPtr descr, int nd, int flags, > [In][MarshalAs(UnmanagedType.LPArray,SizeParamIndex=1)] long[] dims, > [In][MarshalAs(UnmanagedType.LPArray,SizeParamIndex=1)] long[] strides, IntPtr data, IntPtr interfaceData); > > // Thread-safe. > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArrayAccess_DescrDestroyNames")] > internal static extern void DescrDestroyNames(IntPtr p, int n); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArrayAccess_AddField(IntPtr fields, IntPtr names, int i, > [MarshalAs(UnmanagedType.LPStr)]string name, IntPtr descr, int offset, > [MarshalAs(UnmanagedType.LPStr)]string title); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_DescrNewVoid(IntPtr fields, IntPtr names, int elsize, int flags, int alignment); > > /// > /// Allocates a new VOID descriptor and sets the subarray field as specified. > /// > /// Base descriptor for the subarray > /// Number of dimensions > /// Array of size of each dimension > /// New descriptor object > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_DescrNewSubarray(IntPtr baseDescr, > int ndim, [In][MarshalAs(UnmanagedType.LPArray)]IntPtr[] dims); > > /// > /// Replaces / sets the subarray field of an existing object. > /// > /// Descriptor object to be modified > /// Base descriptor for the subaray > /// Number of dimensions > /// Array of size of each dimension > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArrayAccess_DescrReplaceSubarray(IntPtr descr, IntPtr baseDescr, > int ndim, [In][MarshalAs(UnmanagedType.LPArray)]IntPtr[] dims); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArrayAccess_DescrReplaceFields(IntPtr descr, IntPtr namesArr, IntPtr fieldsDict); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArrayAccess_GetBytes(IntPtr arr, > [Out][MarshalAs(UnmanagedType.LPArray,SizeParamIndex=2)] byte[] bytes, long len, int order); > > // Thread-safe > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > internal static extern IntPtr NpyArrayAccess_ToInterface(IntPtr arr); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArrayAccess_ZeroFill(IntPtr arr, IntPtr offset); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_DupZeroElem(IntPtr arr); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArrayAccess_Fill(IntPtr arr); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static unsafe extern void NpyArrayAccess_CopySwapIn(IntPtr arr, long offset, void* data, int swap); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_ViewLike(IntPtr arr, IntPtr proto); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static unsafe extern void NpyArrayAccess_CopySwapOut(IntPtr arr, long offset, void* data, int swap); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static unsafe extern void NpyArrayAccess_CopySwapScalar(IntPtr dtype, void *dest, void* src, bool swap); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern int NpyArrayAccess_SetDateTimeInfo(IntPtr descr, > [MarshalAs(UnmanagedType.LPStr)]string units, int num, int den, int events); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_InheritDescriptor(IntPtr type, IntPtr conv); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_GetBufferFormatString(IntPtr arr); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArrayAccess_Free(IntPtr ptr); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyArrayAccess_FromFile(string fileName, IntPtr dtype, int count, string sep); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern void NpyArrayAccess_SetNamesList(IntPtr dtype, string[] nameslist, int len); > > // Thread-safe > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArrayAccess_DictAllocIter")] > internal static extern IntPtr NpyDict_AllocIter(); > > // Thread-safe > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArrayAccess_DictFreeIter")] > internal static extern void NpyDict_FreeIter(IntPtr iter); > > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl)] > private static extern IntPtr NpyUFuncAccess_UFuncFromPyFunc(int nin, int nout, String funcName, IntPtr pyThunk, IntPtr func); > > /// > /// Accesses the next dictionary item, returning the key and value. Thread-safe when operating across > /// separate iterators; caller must ensure that one iterator is not access simultaneously from two > /// different threads. > /// > /// Pointer to the dictionary object > /// Iterator structure > /// Next key > /// Next value > /// True if an element was returned, false at the end of the sequence > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint="NpyArrayAccess_DictNext")] > internal static extern bool NpyDict_Next(IntPtr dict, IntPtr iter, out IntPtr key, out IntPtr value); > > // Thread-safe > [DllImport("NpyAccessLib", CallingConvention = CallingConvention.Cdecl, EntryPoint = "NpyArrayAccess_FormatLongFloat")] > internal static extern string FormatLongFloat(double v, int precision); > > #endregion > > > #region Callbacks and native access > > /* This structure must match the NpyObject_HEAD structure in npy_object.h > * exactly as it is used to determine the platform-specific offsets. The > * offsets allow the C# code to access these fields directly. */ > [StructLayout(LayoutKind.Sequential)] > internal struct NpyObject_HEAD { > internal IntPtr nob_refcnt; > internal IntPtr nob_type; > internal IntPtr nob_interface; > } > > [StructLayout(LayoutKind.Sequential)] > struct NpyInterface_WrapperFuncs { > internal IntPtr array_new_wrapper; > internal IntPtr iter_new_wrapper; > internal IntPtr multi_iter_new_wrapper; > internal IntPtr neighbor_iter_new_wrapper; > internal IntPtr descr_new_from_type; > internal IntPtr descr_new_from_wrapper; > internal IntPtr ufunc_new_wrapper; > } > > [StructLayout(LayoutKind.Sequential)] > internal struct NpyArrayOffsets { > internal int off_magic_number; > internal int off_descr; > internal int off_nd; > internal int off_dimensions; > internal int off_strides; > internal int off_flags; > internal int off_data; > internal int off_base_obj; > internal int off_base_array; > } > > [StructLayout(LayoutKind.Sequential)] > internal struct NpyArrayDescrOffsets > { > internal int off_magic_number; > internal int off_kind; > internal int off_type; > internal int off_byteorder; > internal int off_flags; > internal int off_type_num; > internal int off_elsize; > internal int off_alignment; > internal int off_names; > internal int off_subarray; > internal int off_fields; > internal int off_dtinfo; > > /// > /// Offset to the 'offset' field of the NpyArray_DescrField structure. > /// > internal int off_fields_offset; > > /// > /// Offset to the 'descr' field of the NpyArray_DescrField structure. > /// > internal int off_fields_descr; > > /// > /// Offset to the 'title' field of the NpyArray_DescrField structure. > /// > internal int off_fields_title; > } > > [StructLayout(LayoutKind.Sequential)] > internal struct NpyArrayIterOffsets > { > internal int off_size; > internal int off_index; > } > > [StructLayout(LayoutKind.Sequential)] > internal struct NpyArrayMultiIterOffsets > { > internal int off_numiter; > internal int off_size; > internal int off_index; > internal int off_nd; > internal int off_dimensions; > internal int off_iters; > } > > [StructLayout(LayoutKind.Sequential)] > internal struct NpyArrayIndexInfo { > internal int off_union; > internal int sizeof_index; > internal int max_dims; > } > > [StructLayout(LayoutKind.Sequential)] > internal struct NpyUFuncOffsets > { > internal int off_nin; > internal int off_nout; > internal int off_nargs; > internal int off_identify; > internal int off_ntypes; > internal int off_check_return; > internal int off_name; > internal int off_types; > internal int off_core_signature; > internal int off_core_enabled; > } > > [StructLayout(LayoutKind.Sequential)] > internal class DateTimeInfo { > internal NpyDefs.NPY_DATETIMEUNIT @base; > internal int num; > internal int den; > internal int events; > } > > [StructLayout(LayoutKind.Sequential)] > internal unsafe struct NpyArray_ArrayDescr { > internal IntPtr @base; > internal IntPtr shape_num_dims; > internal IntPtr* shape_dims; > } > > internal static readonly NpyArrayOffsets ArrayOffsets; > internal static readonly NpyArrayDescrOffsets DescrOffsets; > internal static readonly NpyArrayIterOffsets IterOffsets; > internal static readonly NpyArrayMultiIterOffsets MultiIterOffsets; > internal static readonly NpyArrayIndexInfo IndexInfo; > internal static readonly NpyUFuncOffsets UFuncOffsets; > > internal static byte oppositeByteOrder; > > /// > /// Used for synchronizing modifications to interface pointer. > /// > private static object interfaceSyncRoot = new Object(); > > /// > /// Offset to the interface pointer. > /// > internal static int Offset_InterfacePtr = (int)Marshal.OffsetOf(typeof(NpyObject_HEAD), "nob_interface"); > > /// > /// Offset to the reference count in the header structure. > /// > internal static int Offset_RefCount = (int)Marshal.OffsetOf(typeof(NpyObject_HEAD), "nob_refcnt"); > > private static IntPtr lastArrayHandle = IntPtr.Zero; > > /// > /// Given a pointer to a core (native) object, returns the managed wrapper. > /// > /// Address of native object > /// Managed wrapper object > internal static TResult ToInterface(IntPtr ptr) { > if (ptr == IntPtr.Zero) { > return default(TResult); > } > > IntPtr wrapper = Marshal.ReadIntPtr(ptr, (int)Offset_InterfacePtr); > if (wrapper == IntPtr.Zero) { > // The wrapper object is dynamically created for some instances > // so this call into native land triggers that magic. > wrapper = NpyArrayAccess_ToInterface(ptr); > if (wrapper == IntPtr.Zero) { > throw new IronPython.Runtime.Exceptions.RuntimeException( > String.Format("Managed wrapper for type '{0}' is NULL.", typeof(TResult).Name)); > } > } > return (TResult)GCHandleFromIntPtr(wrapper).Target; > } > > > /// > /// Same as ToInterface but releases the core reference. > /// > /// Type of the expected object > /// Pointer to the core object > /// Wrapper instance corresponding to ptr > internal static TResult DecrefToInterface(IntPtr ptr) { > CheckError(); > if (ptr == IntPtr.Zero) { > return default(TResult); > } > TResult result = ToInterface(ptr); > Decref(ptr); > return result; > } > > > /// > /// Allocates a managed wrapper for the passed array object. > /// > /// Pointer to the native array object > /// If true forces base array type, not subtype > /// Not sure how this is used > /// Not used > /// void ** for us to store the allocated wrapper > /// True on success, false on failure > private static int ArrayNewWrapper(IntPtr coreArray, int ensureArray, > int customStrides, IntPtr subtypePtr, IntPtr interfaceData, > IntPtr interfaceRet) { > int success = 1; // Success > > try { > PythonType subtype = null; > object interfaceObj = null; > ndarray wrapArray = null; > > if (interfaceData != IntPtr.Zero) { > interfaceObj = GCHandleFromIntPtr(interfaceData, true).Target; > } > > if (interfaceObj is UseExistingWrapper) { > // Special case for UseExistingWrapper > UseExistingWrapper w = (UseExistingWrapper)interfaceObj; > wrapArray = (ndarray)w.Wrapper; > wrapArray.SetArray(coreArray); > subtype = DynamicHelpers.GetPythonType(wrapArray); > } else { > // Determine the subtype. null means ndarray > if (ensureArray == 0) { > if (subtypePtr != IntPtr.Zero) { > subtype = (PythonType)GCHandleFromIntPtr(subtypePtr).Target; > } else if (interfaceObj != null) { > subtype = DynamicHelpers.GetPythonType(interfaceObj); > } > } > // Create the wrapper > if (subtype != null) { > CodeContext cntx = NpyUtil_Python.DefaultContext; > wrapArray = (ndarray)ObjectOps.__new__(cntx, subtype); > wrapArray.SetArray(coreArray); > } else { > wrapArray = new ndarray(); > wrapArray.SetArray(coreArray); > } > } > > // Call __array_finalize__ for subtypes > if (subtype != null) { > CodeContext cntx = NpyUtil_Python.DefaultContext; > if (PythonOps.HasAttr(cntx, wrapArray, "__array_finalize__")) { > object func = PythonOps.ObjectGetAttribute(cntx, wrapArray, "__array_finalize__"); > if (func != null) { > if (customStrides != 0) { > UpdateFlags(wrapArray, NpyDefs.NPY_UPDATE_ALL); > } > // TODO: Check for a Capsule > PythonCalls.Call(cntx, func, interfaceObj); > } > } > } > > // Write the result > IntPtr ret = GCHandle.ToIntPtr(AllocGCHandle(wrapArray)); > lastArrayHandle = ret; > Marshal.WriteIntPtr(interfaceRet, ret); > ndarray.IncreaseMemoryPressure(wrapArray); > > // TODO: Skipping subtype-specific initialization (ctors.c:718) > } catch (InsufficientMemoryException) { > Console.WriteLine("Insufficient memory while allocating array wrapper."); > success = 0; > } catch (Exception e) { > Console.WriteLine("Exception while allocating array wrapper: {0}", e.Message); > success = 0; > } > return success; > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate int del_ArrayNewWrapper(IntPtr coreArray, int ensureArray, > int customStrides, IntPtr subtypePtr, IntPtr interfaceData, > IntPtr interfaceRet); > > > /// > /// Constructs a new managed wrapper for an interator object. This function > /// is thread-safe. > /// > /// Pointer to the native instance > /// Location to store GCHandle to the wrapper > /// 1 on success, 0 on error > private static int IterNewWrapper(IntPtr coreIter, ref IntPtr interfaceRet) { > int success = 1; > > try { > lock (interfaceSyncRoot) { > // Check interfaceRet inside the lock because some interface > // wrappers are dynamically created and two threads could > // trigger these event at the same time. > if (interfaceRet == IntPtr.Zero) { > flatiter wrapIter = new flatiter(coreIter); > interfaceRet = GCHandle.ToIntPtr(AllocGCHandle(wrapIter)); > } > } > } catch (InsufficientMemoryException) { > Console.WriteLine("Insufficient memory while allocating iterator wrapper."); > success = 0; > } catch (Exception) { > Console.WriteLine("Exception while allocating iterator wrapper."); > success = 0; > } > return success; > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate int del_IterNewWrapper(IntPtr coreIter, ref IntPtr interfaceRet); > > > > /// > /// Constructs a new managed wrapper for a multi-iterator. This funtion > /// is thread safe. > /// > /// Pointer to the native instance > /// Location to store the wrapper handle > /// > private static int MultiIterNewWrapper(IntPtr coreIter, ref IntPtr interfaceRet) { > int success = 1; > try { > lock (interfaceSyncRoot) { > // Check interfaceRet inside the lock because some interface > // wrappers are dynamically created and two threads could > // trigger these event at the same time. > if (interfaceRet == IntPtr.Zero) { > broadcast wrapIter = broadcast.BeingCreated; > interfaceRet = GCHandle.ToIntPtr(AllocGCHandle(wrapIter)); > } > } > } catch (InsufficientMemoryException) { > Console.WriteLine("Insufficient memory while allocating iterator wrapper."); > success = 0; > } catch (Exception) { > Console.WriteLine("Exception while allocating iterator wrapper."); > success = 0; > } > return success; > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate int del_MultiIterNewWrapper(IntPtr coreIter, ref IntPtr interfaceRet); > > > /// > /// Allocated a managed wrapper for one of the core, native types > /// > /// Type code (not used) > /// Pointer to the native descriptor object > /// void** for returning allocated wrapper > /// 1 on success, 0 on error > private static int DescrNewFromType(int type, IntPtr descr, IntPtr interfaceRet) { > int success = 1; > > try { > // TODO: Descriptor typeobj not handled. Do we need to? > > dtype wrap = new dtype(descr, type); > Marshal.WriteIntPtr(interfaceRet, > GCHandle.ToIntPtr(AllocGCHandle(wrap))); > } catch (InsufficientMemoryException) { > Console.WriteLine("Insufficient memory while allocating descriptor wrapper."); > success = 0; > } catch (Exception) { > Console.WriteLine("Exception while allocating descriptor wrapper."); > success = 0; > } > return success; > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate int del_DescrNewFromType(int type, IntPtr descr, IntPtr interfaceRet); > > > > > /// > /// Allocated a managed wrapper for a user defined type > /// > /// Pointer to the base descriptor (not used) > /// Pointer to the native descriptor object > /// void** for returning allocated wrapper > /// 1 on success, 0 on error > private static int DescrNewFromWrapper(IntPtr baseTmp, IntPtr descr, IntPtr interfaceRet) { > int success = 1; > > try { > // TODO: Descriptor typeobj not handled. Do we need to? > > dtype wrap = new dtype(descr); > Marshal.WriteIntPtr(interfaceRet, > GCHandle.ToIntPtr(AllocGCHandle(wrap))); > } catch (InsufficientMemoryException) { > Console.WriteLine("Insufficient memory while allocating descriptor wrapper."); > success = 0; > } catch (Exception) { > Console.WriteLine("Exception while allocating descriptor wrapper."); > success = 0; > } > return success; > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate int del_DescrNewFromWrapper(IntPtr baseTmp, IntPtr descr, IntPtr interfaceRet); > > > > /// > /// Allocated a managed wrapper for a UFunc object. > /// > /// Pointer to the base object > /// void** for returning allocated wrapper > /// 1 on success, 0 on error > private static void UFuncNewWrapper(IntPtr basePtr, IntPtr interfaceRet) { > try { > ufunc wrap = new ufunc(basePtr); > Marshal.WriteIntPtr(interfaceRet, > GCHandle.ToIntPtr(AllocGCHandle(wrap))); > } catch (InsufficientMemoryException) { > Console.WriteLine("Insufficient memory while allocating ufunc wrapper."); > } catch (Exception) { > Console.WriteLine("Exception while allocating ufunc wrapper."); > } > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate void del_UFuncNewWrapper(IntPtr basePtr, IntPtr interfaceRet); > > > /// > /// Accepts a pointer to an existing GCHandle object and allocates > /// an additional GCHandle to the same object. This effectively > /// does an "incref" on the object. Used in cases where an array > /// of objects is being copied. > /// > /// Usually wrapPtr is NULL meaning that we just allocate a new > /// handle and return it. If wrapPtr != NULL then we assign the > /// new handle to it as well. Must be done atomically. > /// > /// Pointer to GCHandle of object to reference > /// Address of the nob_interface field (not value of it) > /// New handle to the input object > private static IntPtr IncrefCallback(IntPtr ptr, IntPtr nobInterfacePtr) { > if (ptr == IntPtr.Zero) { > return IntPtr.Zero; > } > > IntPtr newWrapRef = IntPtr.Zero; > lock (interfaceSyncRoot) { > GCHandle oldWrapRef = GCHandleFromIntPtr(ptr, true); > object wrapperObj = oldWrapRef.Target; > newWrapRef = GCHandle.ToIntPtr(AllocGCHandle(wrapperObj)); > if (nobInterfacePtr != IntPtr.Zero) { > // Replace the contents of nobInterfacePtr with the new reference. > Marshal.WriteIntPtr(nobInterfacePtr, newWrapRef); > FreeGCHandle(oldWrapRef); > } > } > return newWrapRef; > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate IntPtr del_Incref(IntPtr ptr, IntPtr wrapPtr); > > /// > /// Releases the reference to the given interface object. Note that > /// this is not a decref but actual freeingo of this handle, it can > /// not be used again. > /// > /// Interface object to 'decref' > private static void DecrefCallback(IntPtr ptr, IntPtr nobInterfacePtr) { > lock (interfaceSyncRoot) { > if (nobInterfacePtr != IntPtr.Zero) { > // Deferencing the interface wrapper. We can't just null the > // wrapPtr because we have to have maintain the link so we > // allocate a weak reference instead. > GCHandle oldWrapRef = GCHandleFromIntPtr(ptr); > Object wrapperObj = oldWrapRef.Target; > Marshal.WriteIntPtr(nobInterfacePtr, > GCHandle.ToIntPtr(AllocGCHandle(wrapperObj, GCHandleType.Weak))); > FreeGCHandle(oldWrapRef); > } else { > if (ptr != IntPtr.Zero) { > FreeGCHandle(GCHandleFromIntPtr(ptr)); > } > } > } > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate void del_Decref(IntPtr ptr, IntPtr wrapPtr); > > > internal static IntPtr GetRefcnt(IntPtr obj) { > // NOTE: I'm relying on the refcnt being first. > return Marshal.ReadIntPtr(obj); > } > > > > #region Error handling > > /// > /// Error type, determines which type of exception to throw. > /// DANGER! Must be kept in sync with npy_api.h > /// > private enum NpyExc_Type { > MemoryError = 0, > IOError, > ValueError, > TypeError, > IndexError, > RuntimeError, > AttributeError, > ComplexWarning, > NotImplementedError, > FloatingPointError, > NoError > } > > > /// > /// Indicates the most recent error code or NpyExc_NoError if nothing pending > /// > [ThreadStatic] > private static NpyExc_Type ErrorCode = NpyExc_Type.NoError; > > /// > /// Stores the most recent error message per-thread > /// > [ThreadStatic] > private static string ErrorMessage = null; > > public static void CheckError() { > if (ErrorCode != NpyExc_Type.NoError) { > NpyExc_Type errTmp = ErrorCode; > String msgTmp = ErrorMessage; > > ErrorCode = NpyExc_Type.NoError; > ErrorMessage = null; > > switch (errTmp) { > case NpyExc_Type.MemoryError: > throw new InsufficientMemoryException(msgTmp); > case NpyExc_Type.IOError: > throw new System.IO.IOException(msgTmp); > case NpyExc_Type.ValueError: > throw new ArgumentException(msgTmp); > case NpyExc_Type.IndexError: > throw new IndexOutOfRangeException(msgTmp); > case NpyExc_Type.RuntimeError: > throw new IronPython.Runtime.Exceptions.RuntimeException(msgTmp); > case NpyExc_Type.AttributeError: > throw new MissingMemberException(msgTmp); > case NpyExc_Type.ComplexWarning: > PythonOps.Warn(NpyUtil_Python.DefaultContext, ComplexWarning, msgTmp); > break; > case NpyExc_Type.TypeError: > throw new IronPython.Runtime.Exceptions.TypeErrorException(msgTmp); > case NpyExc_Type.NotImplementedError: > throw new NotImplementedException(msgTmp); > case NpyExc_Type.FloatingPointError: > throw new IronPython.Runtime.Exceptions.FloatingPointException(msgTmp); > default: > Console.WriteLine("Unhandled exception type {0} in CheckError.", errTmp); > throw new IronPython.Runtime.Exceptions.RuntimeException(msgTmp); > } > } > } > > private static PythonType complexWarning; > > internal static PythonType ComplexWarning { > get { > if (complexWarning == null) { > CodeContext cntx = NpyUtil_Python.DefaultContext; > PythonModule core = (PythonModule)PythonOps.ImportBottom(cntx, "numpy.core", 0); > object tmp; > if (PythonOps.ModuleTryGetMember(cntx, core, "ComplexWarning", out tmp)) { > complexWarning = (PythonType)tmp; > } > } > return complexWarning; > } > } > > private static void SetError(NpyExc_Type exceptType, string msg) { > if (exceptType == NpyExc_Type.ComplexWarning) { > Console.WriteLine("Warning: {0}", msg); > } else { > ErrorCode = exceptType; > ErrorMessage = msg; > } > } > > > /// > /// Called by NpyErr_SetMessage in the native world when something bad happens > /// > /// Type of exception to be thrown > /// Message string > unsafe private static void SetErrorCallback(int exceptType, sbyte* bStr) { > if (exceptType < 0 || exceptType >= (int)NpyExc_Type.NoError) { > Console.WriteLine("Internal error: invalid exception type {0}, likely ErrorType and npyexc_type (npy_api.h) are out of sync.", > exceptType); > } > SetError((NpyExc_Type)exceptType, new string(bStr)); > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > unsafe public delegate void del_SetErrorCallback(int exceptType, sbyte* msg); > > > /// > /// Called by native side to check to see if an error occurred > /// > /// 1 if an error is pending, 0 if not > private static int ErrorOccurredCallback() { > return (ErrorCode != NpyExc_Type.NoError) ? 1 : 0; > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate int del_ErrorOccurredCallback(); > > > private static void ClearErrorCallback() { > ErrorCode = NpyExc_Type.NoError; > ErrorMessage = null; > } > [UnmanagedFunctionPointer(CallingConvention.Cdecl)] > public delegate void del_ClearErrorCallback(); > > private static unsafe void GetErrorState(int* bufsizep, int* errmaskp, IntPtr* errobjp) { > // deref any existing obj > if (*errobjp != IntPtr.Zero) { > FreeGCHandle(GCHandleFromIntPtr(*errobjp)); > *errobjp = IntPtr.Zero; > } > var info = umath.errorInfo; > if (info == null) { > *bufsizep = NpyDefs.NPY_BUFSIZE; > *errmaskp = NpyDefs.NPY_UFUNC_ERR_DEFAULT; > *errobjp = IntPtr.Zero; > } else { > umath.ErrorInfo vInfo = (umath.ErrorInfo)info; > *bufsizep = vInfo.bufsize; > *errmaskp = vInfo.errmask; > if (vInfo.errobj != null) { > GCHandle h = AllocGCHandle(vInfo.errobj); > *errobjp = GCHandle.ToIntPtr(h); > } > } > } > > private static unsafe void ErrorHandler(sbyte* name, int errormask, IntPtr errobj, int retstatus, int* first) { > try { > object obj; > if (errobj != IntPtr.Zero) { > obj = GCHandleFromIntPtr(errobj).Target; > } else { > obj = null; > } > string sName = new string(name); > NpyDefs.NPY_UFUNC_ERR method; > if ((retstatus & (int)NpyDefs.NPY_UFUNC_FPE.DIVIDEBYZERO) != 0) { > bool bfirst = (*first != 0); > int handle = (errormask & (int)NpyDefs.NPY_UFUNC_MASK.DIVIDEBYZERO); > method = (NpyDefs.NPY_UFUNC_ERR)(handle >> (int)NpyDefs.NPY_UFUNC_SHIFT.DIVIDEBYZERO); > umath.ErrorHandler(sName, method, obj, "divide by zero", retstatus, ref bfirst); > *first = bfirst ? 1 : 0; > } > if ((retstatus & (int)NpyDefs.NPY_UFUNC_FPE.OVERFLOW) != 0) { > bool bfirst = (*first != 0); > int handle = (errormask & (int)NpyDefs.NPY_UFUNC_MASK.OVERFLOW); > method = (NpyDefs.NPY_UFUNC_ERR)(handle >> (int)NpyDefs.NPY_UFUNC_SHIFT.OVERFLOW); > umath.ErrorHandler(sName, method, obj, "overflow", retstatus, ref bfirst); > *first = bfirst ? 1 : 0; > } > if ((retstatus & (int)NpyDefs.NPY_UFUNC_FPE.UNDERFLOW) != 0) { > bool bfirst = (*first != 0); > int handle = (errormask & (int)NpyDefs.NPY_UFUNC_MASK.UNDERFLOW); > method = (NpyDefs.NPY_UFUNC_ERR)(handle >> (int)NpyDefs.NPY_UFUNC_SHIFT.UNDERFLOW); > umath.ErrorHandler(sName, method, obj, "underflow", retstatus, ref bfirst); > *first = bfirst ? 1 : 0; > } > if ((retstatus & (int)NpyDefs.NPY_UFUNC_FPE.INVALID) != 0) { > bool bfirst = (*first != 0); > int handle = (errormask & (int)NpyDefs.NPY_UFUNC_MASK.INVALID); > method = (NpyDefs.NPY_UFUNC_ERR)(handle >> (int)NpyDefs.NPY_UFUNC_SHIFT.INVALID); > umath.ErrorHandler(sName, method, obj, "invalid", retstatus, ref bfirst); > *first = bfirst ? 1 : 0; > } > } catch (Exception ex) { > SetError(NpyExc_Type.FloatingPointError, ex.Message); > } > } > > #endregion > > #region Thread handling > // CPython uses a threading model that is single threaded unless the global interpreter lock > // is explicitly released. While .NET supports true threading, the ndarray core has not been > // completely checked to makes sure that it is re-entrant much less modify each function to > // perform fine-grained locking on individual objects. Thus we artificially lock IronPython > // down and force ndarray accesses to be single threaded for now. > > /// > /// Equivalent to the CPython GIL. > /// > private static readonly object GlobalIterpLock = new object(); > > /// > /// Releases the GIL so other threads can run. > /// > /// Return value is unused > private static IntPtr EnableThreads() { > Monitor.Exit(GlobalIterpLock); > return IntPtr.Zero; > } > private delegate IntPtr del_EnableThreads(); > > /// > /// Re-acquires the GIL forcing code to stop until other threads have exited the ndarray core. > /// > /// Unused > private static void DisableThreads(IntPtr unused) { > Monitor.Enter(GlobalIterpLock); > } > private delegate void del_DisableThreads(IntPtr unused); > > #endregion > > // > // These variables hold a reference to the delegates passed into the core. > // Failure to hold these references causes the callback function to disappear > // at some point when the GC runs. > // > private static readonly NpyInterface_WrapperFuncs wrapFuncs; > > private static readonly del_ArrayNewWrapper ArrayNewWrapDelegate = > new del_ArrayNewWrapper(ArrayNewWrapper); > private static readonly del_IterNewWrapper IterNewWrapperDelegate = > new del_IterNewWrapper(IterNewWrapper); > private static readonly del_MultiIterNewWrapper MultiIterNewWrapperDelegate = > new del_MultiIterNewWrapper(MultiIterNewWrapper); > private static readonly del_DescrNewFromType DescrNewFromTypeDelegate = > new del_DescrNewFromType(DescrNewFromType); > private static readonly del_DescrNewFromWrapper DescrNewFromWrapperDelegate = > new del_DescrNewFromWrapper(DescrNewFromWrapper); > private static readonly del_UFuncNewWrapper UFuncNewWrapperDelegate = > new del_UFuncNewWrapper(UFuncNewWrapper); > > private static readonly del_Incref IncrefCallbackDelegate = > new del_Incref(IncrefCallback); > private static readonly del_Decref DecrefCallbackDelegate = > new del_Decref(DecrefCallback); > unsafe private static readonly del_SetErrorCallback SetErrorCallbackDelegate = > new del_SetErrorCallback(SetErrorCallback); > private static readonly del_ErrorOccurredCallback ErrorOccurredCallbackDelegate = > new del_ErrorOccurredCallback(ErrorOccurredCallback); > private static readonly del_ClearErrorCallback ClearErrorCallbackDelegate = > new del_ClearErrorCallback(ClearErrorCallback); > private static readonly del_EnableThreads EnableThreadsDelegate = > new del_EnableThreads(EnableThreads); > private static readonly del_DisableThreads DisableThreadsDelegate = > new del_DisableThreads(DisableThreads); > > private static unsafe readonly del_GetErrorState GetErrorStateDelegate = new del_GetErrorState(GetErrorState); > private static unsafe readonly del_ErrorHandler ErrorHandlerDelegate = new del_ErrorHandler(ErrorHandler); > > > /// > /// The native type code that matches up to a 32-bit int. > /// > internal static readonly NpyDefs.NPY_TYPES TypeOf_Int32; > > /// > /// Native type code that matches up to a 64-bit int. > /// > internal static readonly NpyDefs.NPY_TYPES TypeOf_Int64; > > /// > /// Native type code that matches up to a 32-bit unsigned int. > /// > internal static readonly NpyDefs.NPY_TYPES TypeOf_UInt32; > > /// > /// Native type code that matches up to a 64-bit unsigned int. > /// > internal static readonly NpyDefs.NPY_TYPES TypeOf_UInt64; > > /// > /// Size of element in integer arrays, in bytes. > /// > internal static readonly int Native_SizeOfInt; > > /// > /// Size of element in long arrays, in bytes. > /// > internal static readonly int Native_SizeOfLong; > > /// > /// Size of element in long long arrays, in bytes. > /// > internal static readonly int Native_SizeOfLongLong; > > /// > /// Size fo element in long double arrays, in bytes. > /// > internal static readonly int Native_SizeOfLongDouble; > > > /// > /// Initializes the core library with necessary callbacks on load. > /// > static NpyCoreApi() { > try { > // Check the native byte ordering (make sure it matches what .NET uses) and > // figure out the mapping between types that vary in size in the core and > // fixed-size .NET types. > int intSize, longSize, longLongSize, longDoubleSize; > oppositeByteOrder = GetNativeTypeInfo(out intSize, out longSize, out longLongSize, > out longDoubleSize); > > Native_SizeOfInt = intSize; > Native_SizeOfLong = longSize; > Native_SizeOfLongLong = longLongSize; > Native_SizeOfLongDouble = longDoubleSize; > > // Important: keep this consistent with NpyArray_TypestrConvert in npy_conversion_utils.c > if (intSize == 4 && longSize == 4 && longLongSize == 8) { > TypeOf_Int32 = NpyDefs.NPY_TYPES.NPY_LONG; > TypeOf_Int64 = NpyDefs.NPY_TYPES.NPY_LONGLONG; > TypeOf_UInt32 = NpyDefs.NPY_TYPES.NPY_ULONG; > TypeOf_UInt64 = NpyDefs.NPY_TYPES.NPY_ULONGLONG; > } else if (intSize == 4 && longSize == 8 && longLongSize == 8) { > TypeOf_Int32 = NpyDefs.NPY_TYPES.NPY_INT; > TypeOf_Int64 = NpyDefs.NPY_TYPES.NPY_LONG; > TypeOf_UInt32 = NpyDefs.NPY_TYPES.NPY_UINT; > TypeOf_UInt64 = NpyDefs.NPY_TYPES.NPY_ULONG; > } else { > throw new NotImplementedException( > String.Format("Unimplemented combination of native type sizes: int = {0}b, long = {1}b, longlong = {2}b", > intSize, longSize, longLongSize)); > } > > > wrapFuncs = new NpyInterface_WrapperFuncs(); > > wrapFuncs.array_new_wrapper = > Marshal.GetFunctionPointerForDelegate(ArrayNewWrapDelegate); > wrapFuncs.iter_new_wrapper = > Marshal.GetFunctionPointerForDelegate(IterNewWrapperDelegate); > wrapFuncs.multi_iter_new_wrapper = > Marshal.GetFunctionPointerForDelegate(MultiIterNewWrapperDelegate); > wrapFuncs.neighbor_iter_new_wrapper = IntPtr.Zero; > wrapFuncs.descr_new_from_type = > Marshal.GetFunctionPointerForDelegate(DescrNewFromTypeDelegate); > wrapFuncs.descr_new_from_wrapper = > Marshal.GetFunctionPointerForDelegate(DescrNewFromWrapperDelegate); > wrapFuncs.ufunc_new_wrapper = > Marshal.GetFunctionPointerForDelegate(UFuncNewWrapperDelegate); > > int s = Marshal.SizeOf(wrapFuncs.descr_new_from_type); > > NumericOps.NpyArray_FunctionDefs funcDefs = NumericOps.GetFunctionDefs(); > IntPtr funcDefsHandle = IntPtr.Zero; > IntPtr wrapHandle = IntPtr.Zero; > try { > funcDefsHandle = Marshal.AllocHGlobal(Marshal.SizeOf(funcDefs)); > Marshal.StructureToPtr(funcDefs, funcDefsHandle, true); > wrapHandle = Marshal.AllocHGlobal(Marshal.SizeOf(wrapFuncs)); > Marshal.StructureToPtr(wrapFuncs, wrapHandle, true); > > npy_initlib(funcDefsHandle, wrapHandle, > Marshal.GetFunctionPointerForDelegate(SetErrorCallbackDelegate), > Marshal.GetFunctionPointerForDelegate(ErrorOccurredCallbackDelegate), > Marshal.GetFunctionPointerForDelegate(ClearErrorCallbackDelegate), > Marshal.GetFunctionPointerForDelegate(NumericOps.ComparePriorityDelegate), > Marshal.GetFunctionPointerForDelegate(IncrefCallbackDelegate), > Marshal.GetFunctionPointerForDelegate(DecrefCallbackDelegate), > IntPtr.Zero, IntPtr.Zero); > // for now we run full threaded, no safety net. > //Marshal.GetFunctionPointerForDelegate(EnableThreadsDelegate), > //Marshal.GetFunctionPointerForDelegate(DisableThreadsDelegate)); > } catch (Exception e) { > Console.WriteLine("Failed during initialization: {0}", e); > } finally { > Marshal.FreeHGlobal(funcDefsHandle); > Marshal.FreeHGlobal(wrapHandle); > } > > // Initialize the offsets to each structure type for fast access > // TODO: Not a great way to do this, but for now it's > // a convenient way to get hard field offsets from the core. > ArrayGetOffsets(out ArrayOffsets.off_magic_number, > out ArrayOffsets.off_descr, > out ArrayOffsets.off_nd, > out ArrayOffsets.off_dimensions, > out ArrayOffsets.off_strides, > out ArrayOffsets.off_flags, > out ArrayOffsets.off_data, > out ArrayOffsets.off_base_obj, > out ArrayOffsets.off_base_array); > > DescrGetOffsets(out DescrOffsets.off_magic_number, > out DescrOffsets.off_kind, > out DescrOffsets.off_type, > out DescrOffsets.off_byteorder, > out DescrOffsets.off_flags, > out DescrOffsets.off_type_num, > out DescrOffsets.off_elsize, > out DescrOffsets.off_alignment, > out DescrOffsets.off_names, > out DescrOffsets.off_subarray, > out DescrOffsets.off_fields, > out DescrOffsets.off_dtinfo, > out DescrOffsets.off_fields_offset, > out DescrOffsets.off_fields_descr, > out DescrOffsets.off_fields_title); > > IterGetOffsets(out IterOffsets.off_size, > out IterOffsets.off_index); > > MultiIterGetOffsets(out MultiIterOffsets.off_numiter, > out MultiIterOffsets.off_size, > out MultiIterOffsets.off_index, > out MultiIterOffsets.off_nd, > out MultiIterOffsets.off_dimensions, > out MultiIterOffsets.off_iters); > > GetIndexInfo(out IndexInfo.off_union, out IndexInfo.sizeof_index, out IndexInfo.max_dims); > > UFuncGetOffsets(out UFuncOffsets.off_nin, out UFuncOffsets.off_nout, > out UFuncOffsets.off_nargs, out UFuncOffsets.off_core_enabled, > out UFuncOffsets.off_identify, out UFuncOffsets.off_ntypes, > out UFuncOffsets.off_check_return, out UFuncOffsets.off_name, > out UFuncOffsets.off_types, out UFuncOffsets.off_core_signature); > > NpyUFunc_SetFpErrFuncs(GetErrorStateDelegate, ErrorHandlerDelegate); > > // Causes the sort functions to be registered with the type descriptor objects. > NpyArray_InitSortModule(); > } catch (Exception e) { > // Report any details that we can here because IronPython only reports > // that the static type initializer failed. > Console.WriteLine("Failed while initializing NpyCoreApi: {0}:{1}", e.GetType().Name, e.Message); > Console.WriteLine("NumpyDotNet stack trace:\n{0}", e.StackTrace); > throw e; > } > } > #endregion > > > #region Memory verification > > // Turns on/off verification of native memory handles. This functionality adds substantial runtime > // overhead but can be invaluable in tracking down accesses of freed pointers and other faults. > #if DEBUG > private const bool CheckMemoryAccesses = true; > #else > private const bool CheckMemoryAccesses = false; > #endif > > /// > /// Set of all currently allocated GCHandles and the type of handle. > /// > private static readonly Dictionary AllocatedHandles = new Dictionary(); > > /// > /// Set of freed GC handles that we should not be accessing. > /// > private static readonly HashSet FreedHandles = new HashSet(); > > /// > /// Allocates a GCHandle for a given object. If CheckMemoryAccesses is false, > /// this is inlined into the normal GCHandle call. If not, it performs the > /// access checking. > /// > /// Object to get a handle to > /// Handle type, default is normal > /// GCHandle instance > internal static GCHandle AllocGCHandle(Object o, GCHandleType type=GCHandleType.Normal) { > GCHandle h = GCHandle.Alloc(o, type); > if (CheckMemoryAccesses) { > lock (AllocatedHandles) { > IntPtr p = GCHandle.ToIntPtr(h); > if (AllocatedHandles.ContainsKey(p)) { > throw new AccessViolationException( > String.Format("Internal error: detected duplicate allocation of GCHandle. Probably a bookkeeping error. Handle is {0}.", > p)); > } > if (FreedHandles.Contains(p)) { > FreedHandles.Remove(p); > } > AllocatedHandles.Add(p, type); > } > } > return h; > } > > /// > /// Verifies that a GCHandle is known and good prior to using it. If > /// CheckMemoryAccesses is false, this is a no-op and goes away. > /// > /// Handle to verify > internal static GCHandle GCHandleFromIntPtr(IntPtr p, bool weakOk=false) { > if (CheckMemoryAccesses) { > lock (AllocatedHandles) { > GCHandleType handleType; > if (FreedHandles.Contains(p)) { > throw new AccessViolationException( > String.Format("Internal error: accessing already freed GCHandle {0}.", p)); > } > if (!AllocatedHandles.TryGetValue(p, out handleType)) { > throw new AccessViolationException( > String.Format("Internal error: attempt to access unknown GCHandle {0}.", p)); > } // else if (handleType == GCHandleType.Weak && !weakOk) { > // throw new AccessViolationException( > // String.Format("Internal error: invalid attempt to access weak reference {0}.", p)); > //} > } > } > return GCHandle.FromIntPtr(p); > } > > /// > /// Releases a GCHandle instance for an object. If CheckMemoryAccesses is > /// false this is inlined to the GCHandle.Free() method. Otherwise it verifies > /// that the handle is legit. > /// > /// GCHandle to release > internal static void FreeGCHandle(GCHandle h) { > if (CheckMemoryAccesses) { > lock (AllocatedHandles) { > IntPtr p = GCHandle.ToIntPtr(h); > if (FreedHandles.Contains(p)) { > throw new AccessViolationException( > String.Format("Internal error: freeing already freed GCHandle {0}.", p)); > } > if (!AllocatedHandles.ContainsKey(p)) { > throw new AccessViolationException( > String.Format("Internal error: freeing unknown GCHandle {0}.", p)); > } > AllocatedHandles.Remove(p); > FreedHandles.Add(p); > } > } > h.Free(); > } > > #endregion > } > } diff -r pure-numpy/src/numpy/NumpyDotNet/NpyDefs.cs numpy-refactor/numpy/NumpyDotNet/NpyDefs.cs 6c6 < namespace Cascade.VTFA.Python.Numpy { --- > namespace NumpyDotNet { diff -r pure-numpy/src/numpy/NumpyDotNet/NpyDescr.cs numpy-refactor/numpy/NumpyDotNet/NpyDescr.cs 13c13 < namespace Cascade.VTFA.Python.Numpy { --- > namespace NumpyDotNet { diff -r pure-numpy/src/numpy/NumpyDotNet/NpyIndexes.cs numpy-refactor/numpy/NumpyDotNet/NpyIndexes.cs 8c8 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/NpyUtils.cs numpy-refactor/numpy/NumpyDotNet/NpyUtils.cs 13,14c13 < namespace Cascade.VTFA.Python.Numpy < { --- > namespace NumpyDotNet { 231c230 < } catch (Exception) { } --- > } catch (Exception e) { } diff -r pure-numpy/src/numpy/NumpyDotNet/NumericOps.cs numpy-refactor/numpy/NumpyDotNet/NumericOps.cs 17c17 < namespace Cascade.VTFA.Python.Numpy { --- > namespace NumpyDotNet { 518c518 < } catch (ArgumentException) { --- > } catch (ArgumentException e) { diff -r pure-numpy/src/numpy/NumpyDotNet/NumpyDotNet.csproj numpy-refactor/numpy/NumpyDotNet/NumpyDotNet.csproj 1,209c1,202 < ??? < < < Debug < AnyCPU < 8.0.30703 < 2.0 < {9D8FA516-085C-40B2-93CA-F3A419B2FCED} < Library < Properties < Cascade.VTFA.Python.Numpy < Numpy < v4.0 < 512 < SAK < SAK < SAK < SAK < < < true < full < false < bin\ < DEBUG;TRACE < prompt < 4 < true < MinimumRecommendedRules.ruleset < < < pdbonly < true < bin\ < TRACE < prompt < 4 < true < < < true < bin\ < DEBUG;TRACE < true < full < AnyCPU < bin\Debug\NumpyDotNet.dll.CodeAnalysisLog.xml < true < GlobalSuppressions.cs < prompt < MinimumRecommendedRules.ruleset < ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets < false < ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules < false < false < < < true < bin\ < DEBUG;TRACE < true < full < AnyCPU < bin\Debug\NumpyDotNet.dll.CodeAnalysisLog.xml < true < GlobalSuppressions.cs < prompt < MinimumRecommendedRules.ruleset < ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets < false < ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules < false < true < < < true < bin\x64\Debug\ < DEBUG;TRACE < true < full < x64 < bin\NumpyDotNet.dll.CodeAnalysisLog.xml < true < GlobalSuppressions.cs < prompt < MinimumRecommendedRules.ruleset < ;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets < false < ;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules < false < < < bin\x64\Release\ < TRACE < true < true < pdbonly < x64 < bin\NumpyDotNet.dll.CodeAnalysisLog.xml < true < GlobalSuppressions.cs < prompt < MinimumRecommendedRules.ruleset < ;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets < false < ;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules < false < false < < < true < bin\x64\Debug_Install\ < DEBUG;TRACE < true < full < x64 < bin\Debug\NumpyDotNet.dll.CodeAnalysisLog.xml < true < GlobalSuppressions.cs < prompt < MinimumRecommendedRules.ruleset < ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets < false < ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules < false < false < < < true < bin\x64\Release_Install\ < DEBUG;TRACE < true < true < full < x64 < bin\Debug\NumpyDotNet.dll.CodeAnalysisLog.xml < true < GlobalSuppressions.cs < prompt < MinimumRecommendedRules.ruleset < ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets < false < ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules < false < false < < < < False < ..\..\..\PythonConsoleControl\RequiredLibraries\IronPython\IronPython.dll < < < False < ..\..\..\PythonConsoleControl\RequiredLibraries\IronPython\IronPython.Modules.dll < < < False < ..\..\..\PythonConsoleControl\RequiredLibraries\IronPython\Microsoft.Dynamic.dll < < < False < ..\..\..\PythonConsoleControl\RequiredLibraries\IronPython\Microsoft.Scripting.dll < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < "$(IRONPYTHON_HOME)\ipy.exe" "$(SolutionDir)\PythonNumPy\iron_install.py" "$(SolutionDir)\PythonNumPy\\" "$(SolutionDir)\VTFA\$(OutDir)\" < < < < < --- > ??? > > > Debug > AnyCPU > 8.0.30703 > 2.0 > {9D8FA516-085C-40B2-93CA-F3A419B2FCED} > Library > Properties > NumpyDotNet > NumpyDotNet > v4.0 > 512 > > > true > full > false > bin\ > DEBUG;TRACE > prompt > 4 > true > MinimumRecommendedRules.ruleset > > > pdbonly > true > bin\ > TRACE > prompt > 4 > true > > > true > bin\ > DEBUG;TRACE > true > full > AnyCPU > bin\Debug\NumpyDotNet.dll.CodeAnalysisLog.xml > true > GlobalSuppressions.cs > prompt > MinimumRecommendedRules.ruleset > ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets > false > ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules > false > false > > > true > bin\ > DEBUG;TRACE > true > full > AnyCPU > bin\Debug\NumpyDotNet.dll.CodeAnalysisLog.xml > true > GlobalSuppressions.cs > prompt > MinimumRecommendedRules.ruleset > ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets > false > ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules > false > true > > > true > bin\x64\Debug\ > DEBUG;TRACE > true > full > x64 > bin\NumpyDotNet.dll.CodeAnalysisLog.xml > true > GlobalSuppressions.cs > prompt > MinimumRecommendedRules.ruleset > ;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets > false > ;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules > false > > > bin\x64\Release\ > TRACE > true > true > pdbonly > x64 > bin\NumpyDotNet.dll.CodeAnalysisLog.xml > true > GlobalSuppressions.cs > prompt > MinimumRecommendedRules.ruleset > ;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets > false > ;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules > false > false > > > true > bin\x64\Debug_Install\ > DEBUG;TRACE > true > full > x64 > bin\Debug\NumpyDotNet.dll.CodeAnalysisLog.xml > true > GlobalSuppressions.cs > prompt > MinimumRecommendedRules.ruleset > ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets > false > ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules > false > false > > > true > bin\x64\Release_Install\ > DEBUG;TRACE > true > true > full > x64 > bin\Debug\NumpyDotNet.dll.CodeAnalysisLog.xml > true > GlobalSuppressions.cs > prompt > MinimumRecommendedRules.ruleset > ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets > false > ;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules;C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules > false > false > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > PreserveNewest > > > PreserveNewest > > > > > "$(IRONPYTHON_HOME)\ipy.exe" "$(ProjectDir)\..\..\iron_install.py" "$(TargetDir)" > > > > > > 216c209 < --> --- > --> Only in pure-numpy/src/numpy/NumpyDotNet: NumpyDotNet.csproj.user Only in pure-numpy/src/numpy/NumpyDotNet: NumpyDotNet.csproj.vspscc Only in numpy-refactor/numpy/NumpyDotNet: NumpyDotNet.sln Only in pure-numpy/src/numpy/NumpyDotNet: obj diff -r pure-numpy/src/numpy/NumpyDotNet/Properties/AssemblyInfo.cs numpy-refactor/numpy/NumpyDotNet/Properties/AssemblyInfo.cs 7,8c7,8 < // associated with an assembly. < [assembly: AssemblyTitle("VTFA Python Support Numpy")] --- > // associated with an assembly. > [assembly: AssemblyTitle("DotNetInterface")] 10,12c10,12 < [assembly: AssemblyConfiguration("")] < [assembly: AssemblyCompany("Cascade Acoustic Research")] < [assembly: AssemblyProduct("VTFA Python Support")] --- > [assembly: AssemblyConfiguration("")] > [assembly: AssemblyCompany("Scipy / Numpy Community")] > [assembly: AssemblyProduct("DotNetInterface")] diff -r pure-numpy/src/numpy/NumpyDotNet/Scalar.cs numpy-refactor/numpy/NumpyDotNet/Scalar.cs 12c12 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet 1357,1358c1357,1358 < internal new static readonly int MinValue = Int32.MinValue; < internal new static readonly int MaxValue = Int32.MaxValue; --- > internal static readonly int MinValue = Int32.MinValue; > internal static readonly int MaxValue = Int32.MaxValue; diff -r pure-numpy/src/numpy/NumpyDotNet/ScalarMathModule.cs numpy-refactor/numpy/NumpyDotNet/ScalarMathModule.cs 15c15 < namespace Cascade.VTFA.Python.Numpy { --- > namespace NumpyDotNet { diff -r pure-numpy/src/numpy/NumpyDotNet/shape.cs numpy-refactor/numpy/NumpyDotNet/shape.cs 7c7 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet Only in numpy-refactor/numpy/NumpyDotNet: tests diff -r pure-numpy/src/numpy/NumpyDotNet/ufunc.cs numpy-refactor/numpy/NumpyDotNet/ufunc.cs 11a12 > using NumpyDotNet; 13c14 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/umath.cs numpy-refactor/numpy/NumpyDotNet/umath.cs 14c14 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet diff -r pure-numpy/src/numpy/NumpyDotNet/Wrapper.cs numpy-refactor/numpy/NumpyDotNet/Wrapper.cs 6c6 < namespace Cascade.VTFA.Python.Numpy --- > namespace NumpyDotNet Only in pure-numpy/src/numpy/: numpy.vpj Only in pure-numpy/src/numpy/: numpy.vpw Only in pure-numpy/src/numpy/: numpy.vpwhist Only in pure-numpy/src/numpy/: numpy.vtg diff -r pure-numpy/src/numpy/oldnumeric/__init__.py numpy-refactor/numpy/oldnumeric/__init__.py 41a42,45 > > from numpy.testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).bench Only in numpy-refactor/numpy/oldnumeric: tests diff -r pure-numpy/src/numpy/polynomial/__init__.py numpy-refactor/numpy/polynomial/__init__.py 19a20,23 > > from numpy.testing import Tester > test = Tester(__file__).test > bench = Tester(__file__).bench Only in numpy-refactor/numpy/polynomial: tests Only in numpy-refactor/numpy/: random Only in numpy-refactor/numpy/: setup.py Only in numpy-refactor/numpy/: setupscons.py Only in numpy-refactor/numpy/: testing Only in numpy-refactor/numpy/: tests From lukre at microsoft.com Sat Jun 7 01:42:00 2014 From: lukre at microsoft.com (Lutz Kretzschmar) Date: Fri, 6 Jun 2014 23:42:00 +0000 Subject: [Ironpython-users] Installation question Message-ID: Hi there, I was wondering whether it is possible to use IronPython without installing it. I would like to xcopy-deploy it and was hoping there is some mechanism the CreateEngine() can use to point it at the deployed IronPython. Regards, - Lutz -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Tue Jun 10 11:32:12 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Tue, 10 Jun 2014 11:32:12 +0200 Subject: [Ironpython-users] Embedding obspy and dependencies In-Reply-To: References: Message-ID: I briefly checked what obspy is and as expected, main dependency is numpy Your first step would be to build numpy yourself: https://github.com/numpy/numpy-refactor/wiki/Recompile The last beta contains a quite a few cpython/python compatibility fixes. Please, use it: https://ironpython.codeplex.com/releases/view/115611 The version of numpy is not the newest, so you may discover some missing bits. Very likely you should install obspy from source: ipy setup.py install There are well known compatibility issues around string/byte/unicode. If you get that far, please report them here, very likely we can help you with workaround. --pawel On Tue, Jun 3, 2014 at 11:27 PM, Michael Powell wrote: > Hello, > > I am doing some seismic work and would like to embed obspy and > required dependencies in an IronPython-based C# .NET assembly. > > In and of itself, IronPython is straightforward enough. > > What I am not so clear on is how to configure libraries, dependencies, > and so on. For instance, we could start from readily available > binaries, but would we have difficulty with supported Python runtime > versions, things of this nature. > > Would anyone care to comment here? Or has anyone done this type of > thing already? Or even specifically working with obspy? > > Thank you. > > Best regards, > > Michael Powell > _______________________________________________ > Ironpython-users mailing list > Ironpython-users at python.org > https://mail.python.org/mailman/listinfo/ironpython-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhardy at gmail.com Tue Jun 10 15:25:17 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Tue, 10 Jun 2014 14:25:17 +0100 Subject: [Ironpython-users] launcher cp#35064 In-Reply-To: References: Message-ID: On Fri, Jun 6, 2014 at 6:47 PM, Pawel Jasinski wrote: > we have a moded version of pylauncher > https://gist.github.com/paweljasinski/5e0b0b59648c6f85c489 as described > in https://ironpython.codeplex.com/workitem/35064 > Vernon agreed to give it a spin. > Very nice! Did the C code have to change at all? If so, is there an issue in the CPython tracker to get the changes included upstream? > > What I am trying to figure out is distribution and ip integration. > I would be very happy if it could make to 2.7.5 msi. > Is it realistic? > I will hold 2.7.5 until this is integrated. > Should I just add a cproject to ironlanguages? > For now I think we can include it in the main repo if we have to, but I'd prefer not. > > I am also not particularly skilled with Wix magic, can anybody help? > Much of cmd.exe batch and msbuild, I'm reasonably skilled in the arkane magicks of languages no sane person would ever use. :) The difficulty will be in making sure that only one copy of the Python launcher is installed. I'll have to decompile the CPython MSI and brush up on component rules to make sure that if we need an updated binary it gets installed, and that IP doesn't overwrite any other changes (it just merges them with the existing ones). - Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhardy at gmail.com Tue Jun 10 15:31:53 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Tue, 10 Jun 2014 14:31:53 +0100 Subject: [Ironpython-users] Installation question In-Reply-To: References: Message-ID: On Sat, Jun 7, 2014 at 12:42 AM, Lutz Kretzschmar wrote: > Hi there, > > > > I was wondering whether it is possible to use IronPython without > installing it. I would like to xcopy-deploy it and was hoping there is some > mechanism the CreateEngine() can use to point it at the deployed IronPython. > All IronPython releases (http://ironpython.codeplex.com/releases/view/90087) have a zip archive available that contains everything the installer does. If you're embedding, please use the assemblies in the Platforms directory. Python.CreateEngine() will use whatever version is referenced based on the usual .NET search paths and assembly resolution rules. You can also use NuGet if that works in your environment. One catch is that, because the assemblies are strong-named and the installer puts them in the GAC, you may get versioning issues on machines that have a different version installed. 2.7.5 will mitigate this and 3.0 will fix it entirely. - Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhardy at gmail.com Tue Jun 10 15:49:53 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Tue, 10 Jun 2014 14:49:53 +0100 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: On Tue, Jun 3, 2014 at 1:39 PM, Doug Blank wrote: > 2) A pure-Python version would be a lot of work (perhaps building on > PyPy's RPython version and converting their C) and be slow, but would be > little maintenance as most of the details for the current version of numpy > would be static. Requires generic Python skills to develop (a large group > of people have these skills; any generic Python implementation could use). > Anything that we can share with the PyPy/Jython teams will be a huge advantage. Having a pure-Python implementation would at least allow it to work, even if it wouldn't be fast (except on PyPy). It might also be a huge amount of work, but if it's shared between three smallish communities it might not be too bad. I just don't know enough about the architecture of numpy to say for sure - which parts are Python, which are Cython, and which are hand-written CPython modules. If they're going to depend on Cython, it's worth investing in a C# backend for that (unless we can generate cross-platform C++, but .NET Native might make that irrelevant anyway). It would also enable the wonderful Pandas library, hopefully. Getting an idea from the numpy team where they're headed would be a good idea before investing too much work in any case. - Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhardy at gmail.com Tue Jun 10 15:53:22 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Tue, 10 Jun 2014 14:53:22 +0100 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: On Tue, Jun 3, 2014 at 1:53 PM, Olof Bjarnason wrote: > Why isn't CPython+NumPy+SciPy (or what you need on top of NumPy) > enough? It's been tested and maintained for a long time, and works > quite well? > For standalone stuff, yes. But if I'm building an application in .NET that I want users to be able to script using IronPython, and those users might want to do some heavy-duty math (say, a 3D modeling application), it makes sense for numpy to be available directly to IronPython. > It does seem like a daunting task to try and build and maintain > something separate from the mainline NumPy/SciPy community... > Exactly, which is why I would prefer to see use work as closely as possible with them and the PyPy team (remembering that those two didn't work so closely in the early days of NumPyPy...) to reduce the workload. - Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.schaber at codesys.com Tue Jun 10 17:03:01 2014 From: m.schaber at codesys.com (Markus Schaber) Date: Tue, 10 Jun 2014 15:03:01 +0000 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: References: Message-ID: <727D8E16AE957149B447FE368139F2B539BE1DF0@SERVER10> Hi, Jeff, Von: Jeff Hardy > On Tue, Jun 3, 2014 at 1:39 PM, Doug Blank wrote: >> 2) A pure-Python version would be a lot of work (perhaps building on PyPy's RPython version and converting their C) and be slow, but would be little maintenance as most of the details for the current version of numpy would be static. Requires generic Python skills to develop (a large group of people have these skills; any generic Python implementation could use). > Anything that we can share with the PyPy/Jython teams will be a huge advantage. Having a pure-Python implementation would at least allow it to work, even if it wouldn't be fast (except on PyPy). It might also be a huge amount of work, but if it's shared between three smallish communities it might not be too bad. I think one of the problems is to avoid the boxing of doubles to objects when they're stored in arrays / vectors. > I just don't know enough about the architecture of numpy to say for sure - which parts are Python, which are Cython, and which are hand-written CPython modules. If they're going to depend on Cython, it's worth investing in a C# backend for that (unless we can generate cross-platform C++, but .NET Native might make that irrelevant anyway). It would also enable the wonderful Pandas library, hopefully. Getting an idea from the numpy team where they're headed would be a good idea before investing too much work in any case. As far as I can see, IronPython is JITted after being translated by the DLR. Maybe we could enhance that process to involve some type inference (similar to RPython) where possible, and to use Cython style type hints. With the appropriate type mappings (and possibly some hints), this could allow an "pure" Python version to get full .NET speed (e. G. by using .NET arrays of .NET double instead of python arrays). Best regards Markus Schaber CODESYS? a trademark of 3S-Smart Software Solutions GmbH Inspiring Automation Solutions ________________________________________ 3S-Smart Software Solutions GmbH Dipl.-Inf. Markus Schaber | Product Development Core Technology Memminger Str. 151 | 87439 Kempten | Germany Tel. +49-831-54031-979 | Fax +49-831-54031-50 E-Mail: m.schaber at codesys.com | Web: codesys.com | CODESYS store: store.codesys.com CODESYS forum: forum.codesys.com Managing Directors: Dipl.Inf. Dieter Hess, Dipl.Inf. Manfred Werner | Trade register: Kempten HRB 6186 | Tax ID No.: DE 167014915 From jdhardy at gmail.com Tue Jun 10 23:38:04 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Tue, 10 Jun 2014 22:38:04 +0100 Subject: [Ironpython-users] numpy in IronPython In-Reply-To: <727D8E16AE957149B447FE368139F2B539BE1DF0@SERVER10> References: <727D8E16AE957149B447FE368139F2B539BE1DF0@SERVER10> Message-ID: On Tue, Jun 10, 2014 at 4:03 PM, Markus Schaber wrote: > Hi, Jeff, > > Von: Jeff Hardy > > On Tue, Jun 3, 2014 at 1:39 PM, Doug Blank wrote: > >> 2) A pure-Python version would be a lot of work (perhaps building on > PyPy's RPython version and converting their C) and be slow, but would be > little maintenance as most of the details for the current version of numpy > would be static. Requires generic Python skills to develop (a large group > of people have these skills; any generic Python implementation could use). > > > Anything that we can share with the PyPy/Jython teams will be a huge > advantage. Having a pure-Python implementation would at least allow it to > work, even if it wouldn't be fast (except on PyPy). It might also be a huge > amount of work, but if it's shared between three smallish communities it > might not be too bad. > > I think one of the problems is to avoid the boxing of doubles to objects > when they're stored in arrays / vectors. > That's an improvement I would like to see in general. I think IP tries to do that sometimes, but not in the general case. It should be possible for range(10) to return a IList and have it be transparently switched to a IList is something else is added. PyPy does something similar, assuming lists (and dicts IIRC) are homogeneous until proven otherwise. Or maybe it's a case of having the algorithms in Python and the data structures in C#. > > > I just don't know enough about the architecture of numpy to say for sure > - which parts are Python, which are Cython, and which are hand-written > CPython modules. If they're going to depend on Cython, it's worth investing > in a C# backend for that (unless we can generate cross-platform C++, but > .NET Native might make that irrelevant anyway). It would also enable the > wonderful Pandas library, hopefully. Getting an idea from the numpy team > where they're headed would be a good idea before investing too much work in > any case. > > As far as I can see, IronPython is JITted after being translated by the > DLR. Maybe we could enhance that process to involve some type inference > (similar to RPython) where possible, and to use Cython style type hints. > > With the appropriate type mappings (and possibly some hints), this could > allow an "pure" Python version to get full .NET speed (e. G. by using .NET > arrays of .NET double instead of python arrays). > That's what I'd prefer to see as well. It's more engineering work but I think the payoff would be bigger in general. It also allows to start with a slow-but-API-compatible Python version and increase performance later. Still, I'm just providing my opinion - I don't have the time to work on this directly, so the final decision will come down to whoever does the actual work. - Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From no_reply at codeplex.com Wed Jun 11 09:23:23 2014 From: no_reply at codeplex.com (CodePlex) Date: 11 Jun 2014 00:23:23 -0700 Subject: [Ironpython-users] IronPython, Daily Digest 6/10/2014 Message-ID: Hi ironpython, Here's your Daily Digest of new issues for project "IronPython". In today's digest:ISSUES 1. [New comment] wrong line dumped out along with warning message ---------------------------------------------- ISSUES 1. [New comment] wrong line dumped out along with warning message http://ironpython.codeplex.com/workitem/35263 User paweljasinski has commented on the issue: "

resolved in 2bfdf86f673afcaa627c68fa53bd3e1ec1bc5613
Please, note that in order to obtain the actual line number you need to run ipy with ```-X:FullFrames -X:Traceing``` arguments.

" ---------------------------------------------- ---------------------------------------------- You are receiving this email because you subscribed to notifications on CodePlex. To report a bug, request a feature, or add a comment, visit IronPython Issue Tracker. You can unsubscribe or change your issue notification settings on CodePlex.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Wed Jun 11 22:46:21 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Wed, 11 Jun 2014 22:46:21 +0200 Subject: [Ironpython-users] launcher cp#35064 In-Reply-To: References: Message-ID: I have a feeling, that by default when the only installed python is ironpython launcher should use the 32 bit version. Is it correct? --pawel On Wed, Jun 11, 2014 at 6:45 PM, Vernon D. Cole wrote: > On Wed, Jun 11, 2014 at 5:06 PM, Jeff Hardy wrote: > >> On Wed, Jun 11, 2014 at 3:46 PM, Vernon D. Cole >> wrote: >>> >>> On Tue, Jun 10, 2014 at 3:55 PM, Pawel Jasinski < >>> pawel.jasinski at gmail.com> wrote: >>> >>>> On Tue, Jun 10, 2014 at 4:30 PM, Jeff Hardy wrote: >>>> >>>>> On Tue, Jun 10, 2014 at 2:57 PM, Pawel Jasinski < >>>>> pawel.jasinski at gmail.com> wrote: >>>>> >>>>>> I had to modify the C and I am not done yet. I made an author of >>>>>> launcher Vinay Sajip and Marc Hammond aware of our cp. The feedback is >>>>>> very conservative: >>>>>> https://bitbucket.org/vinay.sajip/pylauncher/issue/3/command-line-support-for-configuration >>>>>> For now I will fork it so it doesn't delay 2.7.5 >>>>>> >>>>> >>>>> Does an unmodified launcher work with IronPython at all? >>>>> >>>> >>>> yes, in a limited way described by Vernon. >>>> 1. You have to either provide a full path to ipy as shebang >>>> 2. Make sure ipy is in a path and use #! ipy as shebang >>>> >>> >>> You can also have the full path to IronPython in a py.ini file, then use >>> #!ipy as as shebang [assuming that you define "ipy" in your py.ini]. Then >>> py.exe must be in the path, but IronPython does not have to be. (That is >>> the configuration that I use.) >>> >> >> That's the existing launcher behaviour, right? Adding entries to an .ini >> file with the installer is easy. >> > > Correct! > Quoting PEP-397: > > Two .ini files will be searched by the launcher - ``py.ini`` in the > current user's "application data" directory (i.e. the directory returned > by calling the Windows function SHGetFolderPath with CSIDL_LOCAL_APPDATA, > %USERPROFILE%\AppData\Local on Vista+, > %USERPROFILE%\Local Settings\Application Data on XP) > and ``py.ini`` in the same directory as the launcher. > > Since the launcher is normally installed in C:\Windows, the usual location > will be C:\Windows\py.ini > > >> First can be used if script is installed with setup and shebang is >>>> modified at the time of installation >>>> >>>> Ideally, launcher should detect ipy installation based on registry >>>> content - I have already something but not finished. >>>> If shebang line is one of Unix virtual ones or missing, there is no way >>>> to redirect launcher to use ipy as default. >>>> >>>> The existing launcher starts Python2 by default (even though it is >>> installed with Python3). If there is any installation of (C)Python2 in the >>> registry, it will be launched, otherwise Python3. I think that we need to >>> follow that pattern, giving CPython2 the default position. I have proposed >>> adding a [default] section to py.ini so that a user can change that >>> behavior on his own system. >>> >> >> This is more interesting, since if IronPython is the only installed >> Python then it should be used for scripts without #! lines. >> > > I would agree. > > That could be done two ways: > 1) by looking for IronPython in the registry (the same way the launcher > finds CPython) but that would add a lot of code that the CPython > maintainers might not like. (But, then again, they might, since it would > eliminate the installation sequence problem below.) > > 2) it could be done by adding > >> [default] >> > ipy >> > to the py.ini file (if that feature were to be supported). > The downside to that would be that if CPython were later installed on > the same system, IronPython would continue to be preferred -- so the order > of installation would change the behavior -- which is not a desirable thing. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hernandez.sergio.a at gmail.com Wed Jun 11 19:22:04 2014 From: hernandez.sergio.a at gmail.com (Sergio A. Hernandez) Date: Wed, 11 Jun 2014 11:22:04 -0600 Subject: [Ironpython-users] Error when declaring x:Class in WPF Message-ID: Hello all, I'm receiving a funny error when I try to do this in a XAML file: XAML Code: IPython Code: import wpf from System.Windows import * class MyWindow(Window): def __init__(self): wpf.LoadComponent(self, "pySimpleStack.xaml") if __name__ == '__main__': Application().Run(MyWindow()) As soon as I remove the x:Class declaration my program run just fine. But I found useful to keep the declaration of the class, any ideas? Error: An exception of type 'System.Xaml.XamlObjectWriterException' occurred in Snippets.debug.scripting but was not handled in user code Additional information: Specified class name 'ProjectName.WindowName' doesn't match actual root instance type 'IronPython.NewTypes.System.Windows.Window_4$4'. Remove the Class directive or provide an instance via XamlObjectWriterSettings.RootObjectInstance. -- More important than the intelligence to plan is the ability to execute *Sergio A. Hernandez* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhardy at gmail.com Thu Jun 12 11:43:10 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Thu, 12 Jun 2014 10:43:10 +0100 Subject: [Ironpython-users] launcher cp#35064 In-Reply-To: References: Message-ID: On Wed, Jun 11, 2014 at 9:46 PM, Pawel Jasinski wrote: > I have a feeling, that by default when the only installed python is > ironpython launcher should use the 32 bit version. > Is it correct? > For now, yes. The 64-bit JIT is currently slower than the 32-bit one and generates slower code, so unless the extra memory is needed it's best to use the 32-bit version. - Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From no_reply at codeplex.com Sun Jun 15 09:22:56 2014 From: no_reply at codeplex.com (CodePlex) Date: 15 Jun 2014 00:22:56 -0700 Subject: [Ironpython-users] IronPython, Daily Digest 6/14/2014 Message-ID: Hi ironpython, Here's your Daily Digest of new issues for project "IronPython". In today's digest:ISSUES 1. [New comment] stack trace reported out of order by top level exception handler 2. [New issue] unable to use certificate with quoted part in isser 3. [New issue] certifacte is missing subjectAltName ---------------------------------------------- ISSUES 1. [New comment] stack trace reported out of order by top level exception handler http://ironpython.codeplex.com/workitem/35230 User paweljasinski has commented on the issue: "

I am playing with pip which in turn is using colorama. I have seen again stack trace with out of order pattern. This 2.7.5.b2
```
c:\cygwin64\home\rejap>ipy -m ensurepip
Unhandled exception:
Traceback (most recent call last):
File "C:\Program Files (x86)\IronPython 2.7\Lib\runpy.py", line 175, in run_module
File "C:\Program Files (x86)\IronPython 2.7\Lib\runpy.py", line 72, in _run_code
File "c:\users\rejap\appdata\local\temp\tmpidpff9\pip-1.5.6-py2.py3-none-any.whl\pip\_vendor\colorama\win32.py", line 86, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\runpy.py", line 81, in _run_module_code
File "C:\Users\rejap\AppData\Roaming\Python\IronPython27\site-packages\ensurepip\__main__.py", line 4, in <module>
File "C:\Users\rejap\AppData\Roaming\Python\IronPython27\site-packages\ensurepip\__init__.py", line 211, in _main
File "C:\Users\rejap\AppData\Roaming\Python\IronPython27\site-packages\ensurepip\__init__.py", line 121, in bootstrap
File "C:\Users\rejap\AppData\Roaming\Python\IronPython27\site-packages\ensurepip\__init__.py", line 42, in _run_pip
File "c:\users\rejap\appdata\local\temp\tmpidpff9\pip-1.5.6-py2.py3-none-any.whl\pip\__init__.py", line 9, in <module>
File "c:\users\rejap\appdata\local\temp\tmpidpff9\pip-1.5.6-py2.py3-none-any.whl\pip\log.py", line 9, in <module>
File "c:\users\rejap\appdata\local\temp\tmpidpff9\pip-1.5.6-py2.py3-none-any.whl\pip\_vendor\colorama\__init__.py", line 2, in <module>
File "c:\users\rejap\appdata\local\temp\tmpidpff9\pip-1.5.6-py2.py3-none-any.whl\pip\_vendor\colorama\initialise.py", line 5, in <module>
File "c:\users\rejap\appdata\local\temp\tmpidpff9\pip-1.5.6-py2.py3-none-any.whl\pip\_vendor\colorama\ansitowin32.py", line 6, in <module>
File "c:\users\rejap\appdata\local\temp\tmpidpff9\pip-1.5.6-py2.py3-none-any.whl\pip\_vendor\colorama\winterm.py", line 2, in <module>
TypeError: expected unsigned long, got int
```

3rd frame should be the bottom one.

"----------------- 2. [New issue] unable to use certificate with quoted part in isser http://ironpython.codeplex.com/workitem/35293 User paweljasinski has proposed the issue: "Processing of certificate from pip.python.org causes: SSLError: [Errno Unknown field: ] Inc." The ceritifacte contains issuer: issuer: CN=*.c.ssl.fastly.net, O="Fastly, Inc.", L=San Francisco, S=California, C=US"----------------- 3. [New issue] certifacte is missing subjectAltName http://ironpython.codeplex.com/workitem/35294 User paweljasinski has proposed the issue: "subjectAltName is not populated" ---------------------------------------------- ---------------------------------------------- You are receiving this email because you subscribed to notifications on CodePlex. To report a bug, request a feature, or add a comment, visit IronPython Issue Tracker. You can unsubscribe or change your issue notification settings on CodePlex.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From no_reply at codeplex.com Mon Jun 16 09:22:03 2014 From: no_reply at codeplex.com (CodePlex) Date: 16 Jun 2014 00:22:03 -0700 Subject: [Ironpython-users] IronPython, Daily Digest 6/15/2014 Message-ID: Hi ironpython, Here's your Daily Digest of new issues for project "IronPython". In today's digest:ISSUES 1. [New comment] cannot import idna from encodings (Ipy 2.7.3) 2. [New issue] zlib - ValueError: Invalid initialization option 3. [New comment] zlib - ValueError: Invalid initialization option ---------------------------------------------- ISSUES 1. [New comment] cannot import idna from encodings (Ipy 2.7.3) http://ironpython.codeplex.com/workitem/34651 User paweljasinski has commented on the issue: "

I have spotted this one in 2.7.5b2:
```
>>> 'a'.encode('idna')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
LookupError: unknown encoding: idna
```

"----------------- 2. [New issue] zlib - ValueError: Invalid initialization option http://ironpython.codeplex.com/workitem/35295 User paweljasinski has proposed the issue: "Assuming chunk.gz contains valid gzip file, the following code works under cpython but fails under iron: with open("chunk.gz","r") as f: data = f.read() import zlib d = zlib.decompressobj(16 + zlib.MAX_WBITS) print d.decompress(data) output: $ ipy dec.py Traceback (most recent call last): File "dec.py", line 6, in ValueError: Invalid initialization option "----------------- 3. [New comment] zlib - ValueError: Invalid initialization option http://ironpython.codeplex.com/workitem/35295 User paweljasinski has commented on the issue: "

The missing bit out of http://www.zlib.net/manual.html


windowBits can also be greater than 15 for optional gzip decoding. Add 32 to windowBits to enable zlib and gzip decoding with automatic header detection, or add 16 to decode only the gzip format (the zlib format will return a Z_DATA_ERROR). If a gzip stream is being decoded, strm->adler is a crc32 instead of an adler32.

" ---------------------------------------------- ---------------------------------------------- You are receiving this email because you subscribed to notifications on CodePlex. To report a bug, request a feature, or add a comment, visit IronPython Issue Tracker. You can unsubscribe or change your issue notification settings on CodePlex.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From no_reply at codeplex.com Tue Jun 17 09:23:38 2014 From: no_reply at codeplex.com (CodePlex) Date: 17 Jun 2014 00:23:38 -0700 Subject: [Ironpython-users] IronPython, Daily Digest 6/16/2014 Message-ID: Hi ironpython, Here's your Daily Digest of new issues for project "IronPython". In today's digest:ISSUES 1. [New comment] cannot import idna from encodings (Ipy 2.7.3) 2. [New comment] zlib - ValueError: Invalid initialization option 3. [New comment] zlib - ValueError: Invalid initialization option 4. [New issue] built-in translate throws TypeError 5. [New issue] unexpected behaviour when IOError used as superclass ---------------------------------------------- ISSUES 1. [New comment] cannot import idna from encodings (Ipy 2.7.3) http://ironpython.codeplex.com/workitem/34651 User paweljasinski has commented on the issue: "

I think there is something wrong in installation/packaging. When I run the ipy out of my development location, it works as expected. When I run ipy out of install dir (c:\Program Files (x86)\IronPython27) it doesn't.
I also tried on the separate Windows machine without any development tools installed with fresh new installation of 2.7.5b2 - it doesn't work.

"----------------- 2. [New comment] zlib - ValueError: Invalid initialization option http://ironpython.codeplex.com/workitem/35295 User paweljasinski has commented on the issue: "

workaround:
```
with open("chunk.gz","r") as f:
data = f.read()
import zlib
d = zlib.decompressobj(-zlib.MAX_WBITS)
print d.decompress(data[10:])

"----------------- 3. [New comment] zlib - ValueError: Invalid initialization option http://ironpython.codeplex.com/workitem/35295 User paweljasinski has commented on the issue: "

workaround:
```
with open("chunk.gz","r") as f:
data = f.read()
import zlib
d = zlib.decompressobj(-zlib.MAX_WBITS)
print d.decompress(data[10:])
```

The edit button on your own post is missing :-(

"----------------- 4. [New issue] built-in translate throws TypeError http://ironpython.codeplex.com/workitem/35296 User paweljasinski has proposed the issue: "the following code: import string d=dict([(ord(c), ord(c.lower())) for c in string.ascii_uppercase]) print u"A".translate(d) results in: Traceback (most recent call last): File "translate.py", line 3, in TypeError: Unable to cast object of type 'System.Int32' to type 'System.String'. "----------------- 5. [New issue] unexpected behaviour when IOError used as superclass http://ironpython.codeplex.com/workitem/35300 User paweljasinski has proposed the issue: "This snippet works in cpython, but fails in iron: class RequestException(IOError): def __init__(self, *args, **kwargs): self.response = kwargs.pop('response', None) super(RequestException, self).__init__(*args, **kwargs) class HTTPError(RequestException): """An HTTP error occurred.""" raise HTTPError("a", response="b") the reported error is: Traceback (most recent call last): File "httpex.py", line 13, in TypeError: HTTPError() takes at least 1 argument (3 given) when the IOError is changed to Exception, things are back to normal" ---------------------------------------------- ---------------------------------------------- You are receiving this email because you subscribed to notifications on CodePlex. To report a bug, request a feature, or add a comment, visit IronPython Issue Tracker. You can unsubscribe or change your issue notification settings on CodePlex.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From no_reply at codeplex.com Fri Jun 20 09:23:42 2014 From: no_reply at codeplex.com (CodePlex) Date: 20 Jun 2014 00:23:42 -0700 Subject: [Ironpython-users] IronPython, Daily Digest 6/19/2014 Message-ID: Hi ironpython, Here's your Daily Digest of new issues for project "IronPython". In today's digest:ISSUES 1. [New comment] unexpected behaviour when IOError used as superclass 2. [New issue] izip_longest: argument after * must be a sequence, not dictionary-valueiterator 3. [New comment] izip_longest: argument after * must be a sequence, not dictionary-valueiterator ---------------------------------------------- ISSUES 1. [New comment] unexpected behaviour when IOError used as superclass http://ironpython.codeplex.com/workitem/35300 User paweljasinski has commented on the issue: "

the culprit is EnironmentError class, or _EnvironmentError class to be exact.
It provides __new__ method with a signature without kwArgs argument. The __new__ method is used to provide additional restrictions on arguments for __init__ method.

"----------------- 2. [New issue] izip_longest: argument after * must be a sequence, not dictionary-valueiterator http://ironpython.codeplex.com/workitem/35305 User Demon has proposed the issue: "izip_longest fails with an asterisked dictionary-valueiterator Code: >>> import itertools >>> itertools.izip_longest(*{}.itervalues()) Traceback (most recent call last): File "", line 1, in TypeError: izip_longest() argument after * must be a sequence, not dictionary-valueiterator [ while this works obviously: itertools.izip_longest(*{}.values()) ] The same code works with CPython 2.7.5 IPy version: IronPython 2.7.4 (2.7.0.40) on .NET 4.0.30319.18063 (32-bit)"----------------- 3. [New comment] izip_longest: argument after * must be a sequence, not dictionary-valueiterator http://ironpython.codeplex.com/workitem/35305 User paweljasinski has commented on the issue: "

The problem originates in ```IronLanguages\Runtime\Microsoft.Dynamic\Actions\Calls\DefaultOverloadResolver.cs``` line 185

" ---------------------------------------------- ---------------------------------------------- You are receiving this email because you subscribed to notifications on CodePlex. To report a bug, request a feature, or add a comment, visit IronPython Issue Tracker. You can unsubscribe or change your issue notification settings on CodePlex.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From no_reply at codeplex.com Tue Jun 24 09:21:03 2014 From: no_reply at codeplex.com (CodePlex) Date: 24 Jun 2014 00:21:03 -0700 Subject: [Ironpython-users] IronPython, Daily Digest 6/23/2014 Message-ID: Hi ironpython, Here's your Daily Digest of new issues for project "IronPython". In today's digest:ISSUES 1. [New comment] cStringIO.StringIO(u"...").tell() broken with unicode strings 2. [New comment] Cannot redirect stdout using cStringIO.StringIO on Mono 3. [New comment] StringIO read and writes fail with "ValueError: write to closed file" exception 4. [New comment] io.StringIO always closed 5. [New comment] top level exception handler of the interpreter prints wrong traceback 6. [New comment] unable to use certificate with quoted part in issuer 7. [New comment] certifacte is missing subjectAltName 8. [New comment] built-in translate throws TypeError 9. [New comment] unexpected behaviour when IOError used as superclass ---------------------------------------------- ISSUES 1. [New comment] cStringIO.StringIO(u"...").tell() broken with unicode strings http://ironpython.codeplex.com/workitem/19220 User paweljasinski has commented on the issue: "

fixed in 1292d27f3405cead16393d04296fb7ef96019d39

"----------------- 2. [New comment] Cannot redirect stdout using cStringIO.StringIO on Mono http://ironpython.codeplex.com/workitem/26105 User paweljasinski has commented on the issue: "

this work under windows in 1292d27f3405cead16393d04296fb7ef96019d39
can someone confirm fix in mono?

"----------------- 3. [New comment] StringIO read and writes fail with "ValueError: write to closed file" exception http://ironpython.codeplex.com/workitem/34683 User paweljasinski has commented on the issue: "

fixed in 1292d27f3405cead16393d04296fb7ef96019d39

"----------------- 4. [New comment] io.StringIO always closed http://ironpython.codeplex.com/workitem/34713 User paweljasinski has commented on the issue: "

fixed in 1292d27f3405cead16393d04296fb7ef96019d39

"----------------- 5. [New comment] top level exception handler of the interpreter prints wrong traceback http://ironpython.codeplex.com/workitem/34849 User paweljasinski has commented on the issue: "

fixed in 384d8610f299ef5ac71e4af8d531928d8002633f

"----------------- 6. [New comment] unable to use certificate with quoted part in issuer http://ironpython.codeplex.com/workitem/35293 User paweljasinski has commented on the issue: "

fixed in 1a33f5a6c2eb9289829347abb8b0042d49f1f710

"----------------- 7. [New comment] certifacte is missing subjectAltName http://ironpython.codeplex.com/workitem/35294 User paweljasinski has commented on the issue: "

fixed in 491468d02d2338acdbcadd0a4010f780354efed3

"----------------- 8. [New comment] built-in translate throws TypeError http://ironpython.codeplex.com/workitem/35296 User paweljasinski has commented on the issue: "

fixed in 5e8b294e7636c3aa5430085cfcc7a8fbfb18e311

"----------------- 9. [New comment] unexpected behaviour when IOError used as superclass http://ironpython.codeplex.com/workitem/35300 User paweljasinski has commented on the issue: "

fixed in 43a8059bc282f92e0ef585f93c0bf30880d3a18c

" ---------------------------------------------- ---------------------------------------------- You are receiving this email because you subscribed to notifications on CodePlex. To report a bug, request a feature, or add a comment, visit IronPython Issue Tracker. You can unsubscribe or change your issue notification settings on CodePlex.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eraserwars at gmail.com Mon Jun 23 23:46:57 2014 From: eraserwars at gmail.com (Daniel Hsu) Date: Mon, 23 Jun 2014 17:46:57 -0400 Subject: [Ironpython-users] Pyvisa Compatibility Message-ID: Hi All, Has there been any progress recently on Pyvisa compatibility? I'm trying to connect to several machines using Pyvisa but it doesn't recognize visa when I try to "import visa". Visa has worked with Python 3.4 for me before. I get the error message "No module named visa" when I try to import visa in IronPython though. Any help would be appreciated. Thanks, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From slide.o.mix at gmail.com Tue Jun 24 17:11:20 2014 From: slide.o.mix at gmail.com (Slide) Date: Tue, 24 Jun 2014 08:11:20 -0700 Subject: [Ironpython-users] Pyvisa Compatibility In-Reply-To: References: Message-ID: Does pyvisa use a native extension anywhere? If so, then it will not work with IronPython. On Mon, Jun 23, 2014 at 2:46 PM, Daniel Hsu wrote: > Hi All, > > Has there been any progress recently on Pyvisa compatibility? I'm trying > to connect to several machines using Pyvisa but it doesn't recognize visa > when I try to "import visa". Visa has worked with Python 3.4 for me before. > I get the error message "No module named visa" when I try to import visa in > IronPython though. Any help would be appreciated. > > Thanks, > Daniel > > _______________________________________________ > Ironpython-users mailing list > Ironpython-users at python.org > https://mail.python.org/mailman/listinfo/ironpython-users > > -- Website: http://earl-of-code.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From no_reply at codeplex.com Wed Jun 25 09:26:10 2014 From: no_reply at codeplex.com (CodePlex) Date: 25 Jun 2014 00:26:10 -0700 Subject: [Ironpython-users] IronPython, Daily Digest 6/24/2014 Message-ID: Hi ironpython, Here's your Daily Digest of new issues for project "IronPython". In today's digest:ISSUES 1. [New issue] __name__ not set to __main__ ---------------------------------------------- ISSUES 1. [New issue] __name__ not set to __main__ http://ironpython.codeplex.com/workitem/35322 User paweljasinski has proposed the issue: "another cpython difference $ python -c "print __name__" __main__ $ ipy -c "print __name__" " ---------------------------------------------- ---------------------------------------------- You are receiving this email because you subscribed to notifications on CodePlex. To report a bug, request a feature, or add a comment, visit IronPython Issue Tracker. You can unsubscribe or change your issue notification settings on CodePlex.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From slide.o.mix at gmail.com Thu Jun 26 01:47:43 2014 From: slide.o.mix at gmail.com (Slide) Date: Wed, 25 Jun 2014 16:47:43 -0700 Subject: [Ironpython-users] IRC Message-ID: Is anyone interested in having more realtime type discussions via IRC? Any particular network people hang out on? -- Website: http://earl-of-code.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhardy at gmail.com Thu Jun 26 13:06:53 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Thu, 26 Jun 2014 12:06:53 +0100 Subject: [Ironpython-users] IRC In-Reply-To: References: Message-ID: On Thu, Jun 26, 2014 at 12:47 AM, Slide wrote: > Is anyone interested in having more realtime type discussions via IRC? Any > particular network people hang out on? > Work blocks external IRC, so that's not much good for me. jabbr.net might be an option - I created https://jabbr.net/#/rooms/ironpython if anyone is interested. - Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From deneefe at istore.com Thu Jun 26 15:27:22 2014 From: deneefe at istore.com (Robert DeNeefe) Date: Thu, 26 Jun 2014 13:27:22 +0000 Subject: [Ironpython-users] release date for 2.7.5 Message-ID: <4510CD037FB6AF49804933541347B33A2B8E3AD6@ISXCHG.istore.com> Is there a target date for releasing 2.7.5? This email is intended solely for the person or entity to which it is addressed and may contain confidential and/or privileged information. Copying, forwarding or distributing this message by persons or entities other than the addressee is prohibited. If you have received this email in error, please contact the sender immediately and delete the material from any computer. This email may have been monitored for policy compliance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhardy at gmail.com Thu Jun 26 17:14:34 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Thu, 26 Jun 2014 16:14:34 +0100 Subject: [Ironpython-users] release date for 2.7.5 In-Reply-To: <4510CD037FB6AF49804933541347B33A2B8E3AD6@ISXCHG.istore.com> References: <4510CD037FB6AF49804933541347B33A2B8E3AD6@ISXCHG.istore.com> Message-ID: On Thu, Jun 26, 2014 at 2:27 PM, Robert DeNeefe wrote: > Is there a target date for releasing 2.7.5? > Late April sometime. Err, late June. Err... No, the problem has been scope creep (actually, the main problem is that Pawel keeps fixing things), along with some RL time constraints on my part. Right now I want to check how pip/ensurepip works and release Beta 3 based on that. If I get time in between I'll see if I can get virtualenv to work as well. I'm going to draw the line for final somewhere in August, because it can't drag out forever - so let's say August 31 at the latest. - Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.jasinski at gmail.com Thu Jun 26 20:06:20 2014 From: pawel.jasinski at gmail.com (Pawel Jasinski) Date: Thu, 26 Jun 2014 20:06:20 +0200 Subject: [Ironpython-users] release date for 2.7.5 In-Reply-To: References: <4510CD037FB6AF49804933541347B33A2B8E3AD6@ISXCHG.istore.com> Message-ID: the only thing which is not likely to get accepted upstream for ensurepip/pip is the zlib hack. I would like to have it in 2.7.5. I hope I can handle the 'inflate' part. Once this is done, I promise to switch into regression/conservative mode. There is for sure a thing or two coming out of pylauncher. I would be very happy to see 2.7.5 at the end of August. Virtualenv would be a fantastic bonus! --pawel On Thu, Jun 26, 2014 at 5:14 PM, Jeff Hardy wrote: > On Thu, Jun 26, 2014 at 2:27 PM, Robert DeNeefe > wrote: > >> Is there a target date for releasing 2.7.5? >> > > Late April sometime. Err, late June. Err... > > No, the problem has been scope creep (actually, the main problem is that > Pawel keeps fixing things), along with some RL time constraints on my part. > Right now I want to check how pip/ensurepip works and release Beta 3 based > on that. If I get time in between I'll see if I can get virtualenv to work > as well. > > I'm going to draw the line for final somewhere in August, because it can't > drag out forever - so let's say August 31 at the latest. > > - Jeff > > _______________________________________________ > Ironpython-users mailing list > Ironpython-users at python.org > https://mail.python.org/mailman/listinfo/ironpython-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From deneefe at istore.com Thu Jun 26 15:27:22 2014 From: deneefe at istore.com (Robert DeNeefe) Date: Thu, 26 Jun 2014 13:27:22 +0000 Subject: [Ironpython-users] release date for 2.7.5 Message-ID: <4510CD037FB6AF49804933541347B33A2B8E3AD6@ISXCHG.istore.com> Is there a target date for releasing 2.7.5? This email is intended solely for the person or entity to which it is addressed and may contain confidential and/or privileged information. Copying, forwarding or distributing this message by persons or entities other than the addressee is prohibited. If you have received this email in error, please contact the sender immediately and delete the material from any computer. This email may have been monitored for policy compliance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From no_reply at codeplex.com Fri Jun 27 09:28:48 2014 From: no_reply at codeplex.com (CodePlex) Date: 27 Jun 2014 00:28:48 -0700 Subject: [Ironpython-users] IronPython, Daily Digest 6/26/2014 Message-ID: Hi ironpython, Here's your Daily Digest of new issues for project "IronPython". In today's digest:ISSUES 1. [New issue] ctype legacy support ---------------------------------------------- ISSUES 1. [New issue] ctype legacy support http://ironpython.codeplex.com/workitem/35326 User paweljasinski has proposed the issue: "the following appears to be agreed way of doing things in cpython: import ctypes import ctypes.wintypes GetStdHandle = ctypes.windll.kernel32.GetStdHandle GetStdHandle.argtypes = [ ctypes.wintypes.DWORD, ] GetStdHandle.restype = ctypes.wintypes.HANDLE print GetStdHandle(-11) # line 8 Reference: http://stackoverflow.com/questions/17993814/why-the-irrelevant-code-made-a-difference When run, ironpython has a problem with it. rejap at WIN-CUE1I6EN9JB ~/tmp $ ipy ctypes-stdhandle.py Traceback (most recent call last): File "ctypes-stdhandle.py", line 8, in TypeError: expected unsigned long, got int and the same under cpython 2.7-64: rejap at WIN-CUE1I6EN9JB ~/tmp $ /c/Python27/python ctypes-stdhandle.py 424 " ---------------------------------------------- ---------------------------------------------- You are receiving this email because you subscribed to notifications on CodePlex. To report a bug, request a feature, or add a comment, visit IronPython Issue Tracker. You can unsubscribe or change your issue notification settings on CodePlex.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: