From panos.laganakos at gmail.com Fri Jan 1 16:57:12 2021
From: panos.laganakos at gmail.com (Panos Laganakos)
Date: Fri, 1 Jan 2021 23:57:12 +0200
Subject: [pypy-dev] PyPy.org 2021 redesign suggestion
Message-ID:

Hello,

I was going over the PyPy website the other day, and something kept bugging me. While the project itself is in great condition and really amazing at what it does, the website felt that it wasn't giving that information to the average visitor.

So, I took a stab at it, with a basic mockup:
https://www.dropbox.com/s/hnabjxy4aybfp1y/pypy.org-0.3.png?dl=0

And here is an annotated version:
https://www.dropbox.com/s/r817xirkhcm9dks/pypy.org-0.3-annotations.png?dl=0
(hover over the image to see the annotations)

And an announcement header one:
https://www.dropbox.com/s/3fnvjcdm4aak7ys/pypy.org-0.3-announcement.png?dl=0

Let me know what you think.

--
Panos
https://panoslaganakos.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From me at manueljacob.de Sun Jan 3 20:01:40 2021
From: me at manueljacob.de (Manuel Jacob)
Date: Mon, 4 Jan 2021 02:01:40 +0100
Subject: [pypy-dev] PyPy.org 2021 redesign suggestion
In-Reply-To:
References:
Message-ID:

Hi Panos,

I very much like the content and visual structure of your suggestion. However, I don't like the color scheme of the website and the font of the quote. Unfortunately, this is not very constructive, as I don't have a better suggestion. If you have more versions with different colors, I would be interested to see them.

-Manuel

On 01/01/2021 22.57, Panos Laganakos wrote:
> Hello,
>
> I was going over the PyPy website the other day, and something kept bugging me. While the project itself is in great condition and really amazing at what it does, the website felt that it wasn't giving that information to the average visitor.
>
> So, I took a stab at it, with a basic mockup:
> https://www.dropbox.com/s/hnabjxy4aybfp1y/pypy.org-0.3.png?dl=0
>
> And here is an annotated version:
> https://www.dropbox.com/s/r817xirkhcm9dks/pypy.org-0.3-annotations.png?dl=0
> (hover over the image to see the annotations)
>
> And an announcement header one:
> https://www.dropbox.com/s/3fnvjcdm4aak7ys/pypy.org-0.3-announcement.png?dl=0
>
> Let me know what you think.
>
> --
> Panos
> https://panoslaganakos.com
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>

From matti.picus at gmail.com Mon Jan 4 01:26:11 2021
From: matti.picus at gmail.com (Matti Picus)
Date: Mon, 4 Jan 2021 08:26:11 +0200
Subject: [pypy-dev] PyPy.org 2021 redesign suggestion
In-Reply-To:
References:
Message-ID: <228bba9c-9102-81a9-118e-9c2ca545ca5d@gmail.com>

On 1/1/21 11:57 PM, Panos Laganakos wrote:
> Hello,
>
> I was going over the PyPy website the other day, and something kept bugging me. While the project itself is in great condition and really amazing at what it does, the website felt that it wasn't giving that information to the average visitor.
>
> So, I took a stab at it, with a basic mockup:
> https://www.dropbox.com/s/hnabjxy4aybfp1y/pypy.org-0.3.png?dl=0
>
> And here is an annotated version:
> https://www.dropbox.com/s/r817xirkhcm9dks/pypy.org-0.3-annotations.png?dl=0
> (hover over the image to see the annotations)
>
> And an announcement header one:
> https://www.dropbox.com/s/3fnvjcdm4aak7ys/pypy.org-0.3-announcement.png?dl=0
>
> Let me know what you think.
>
> --
> Panos
> https://panoslaganakos.com
>

Thanks for the suggestions.
That is quite an overhaul! I took the liberty of opening an issue on the repo that builds the website: https://foss.heptapod.net/pypy/pypy.org/-/issues/7. The site is built using Nikola and a pypy-specific theme. Would you be up to trying to implement parts/all of the design as merge requests to that repo? Then we could fine-tune the ideas iteratively.

Matti

From anto.cuni at gmail.com Mon Jan 4 05:03:16 2021
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Mon, 4 Jan 2021 11:03:16 +0100
Subject: [pypy-dev] PyPy.org 2021 redesign suggestion
In-Reply-To:
References:
Message-ID:

Hello Panos,
thank you for this!

Personally I like it a lot. As Matti pointed out, in order to be used it needs to be turned into a Nikola theme but hopefully it's not too hard. Historically we as a group have been very bad at designing and implementing the website, so having someone who cares and can do this is awesome!

My personal hope is that this is the starting point for you to become a regular contributor to PyPy, we would appreciate it a lot :).

ciao,
Anto

On Sat, Jan 2, 2021 at 1:25 AM Panos Laganakos wrote:

> Hello,
>
> I was going over the PyPy website the other day, and something kept bugging me. While the project itself is in great condition and really amazing at what it does, the website felt that it wasn't giving that information to the average visitor.
>
> So, I took a stab at it, with a basic mockup:
> https://www.dropbox.com/s/hnabjxy4aybfp1y/pypy.org-0.3.png?dl=0
>
> And here is an annotated version:
> https://www.dropbox.com/s/r817xirkhcm9dks/pypy.org-0.3-annotations.png?dl=0
> (hover over the image to see the annotations)
>
> And an announcement header one:
> https://www.dropbox.com/s/3fnvjcdm4aak7ys/pypy.org-0.3-announcement.png?dl=0
>
> Let me know what you think.
>
> --
> Panos
> https://panoslaganakos.com
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From panos.laganakos at gmail.com Mon Jan 4 06:43:43 2021
From: panos.laganakos at gmail.com (Panos Laganakos)
Date: Mon, 4 Jan 2021 13:43:43 +0200
Subject: [pypy-dev] PyPy.org 2021 redesign suggestion
In-Reply-To:
References:
Message-ID:

Glad you like it!

Yeah, while the project itself is a banger, the "image" of PyPy is a bit lacking. But nothing that can't be fixed. I've set up my work schedule to have time to contribute to one non-work project, so I'm here to help with it if I can.

Yes, I can look into implementing this properly into Nikola, after we settle on a design.

Matti: I'll change the quote font. As for the color scheme I'll try a few and upload the variants and we can pick the one that most of you like best.

On Mon, Jan 4, 2021 at 12:03 PM Antonio Cuni wrote:

> Hello Panos,
> thank you for this!
>
> Personally I like it a lot. As Matti pointed out, in order to be used it needs to be turned into a Nikola theme but hopefully it's not too hard. Historically we as a group have been very bad at designing and implementing the website, so having someone who cares and can do this is awesome!
>
> My personal hope is that this is the starting point for you to become a regular contributor to PyPy, we would appreciate it a lot :).
>
> ciao,
> Anto
>
> On Sat, Jan 2, 2021 at 1:25 AM Panos Laganakos wrote:
>
>> Hello,
>>
>> I was going over the PyPy website the other day, and something kept
>> bugging me.
While the project itself is in great condition really amazing >> at what it does, the website felt that it wasn't giving that information to >> the average visitor. >> >> So, I took a stab at it, with a basic mockup: >> https://www.dropbox.com/s/hnabjxy4aybfp1y/pypy.org-0.3.png?dl=0 >> >> And here is an annotated version: >> >> https://www.dropbox.com/s/r817xirkhcm9dks/pypy.org-0.3-annotations.png?dl=0 >> (hover over the image to see the annotations) >> >> And an announcement header one: >> >> https://www.dropbox.com/s/3fnvjcdm4aak7ys/pypy.org-0.3-announcement.png?dl=0 >> >> Let me know what you think. >> >> >> -- >> P?no? >> https://panoslaganakos.com >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> > -- P?no? https://panoslaganakos.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Mon Jan 4 08:29:59 2021 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 4 Jan 2021 14:29:59 +0100 Subject: [pypy-dev] PyPy.org 2021 redesign suggestion In-Reply-To: References: Message-ID: On Mon, Jan 4, 2021 at 12:43 PM Panos Laganakos wrote: > Glad you like it! > > Yeah, while the project itself is a banger, the "image" of PyPy is a bit > lacking. But nothing that can't be fixed. I've set up my work schedule to > have time to contribute to one non-work project, so I'm here to help with > it if I can. > wonderful! Let's fill this non-work project time with PyPy :) You will probably need a heptapod account to comment on issues and/or make MR: please follow the instructions here and we will be glad to give you access: https://doc.pypy.org/en/latest/contributing.html#get-access Also, most of us hang out on #pypy on freenode, so feel free to join if you like to have more real-time communication. ciao, Anto -------------- next part -------------- An HTML attachment was scrubbed... URL: From calderonchristian73 at gmail.com Mon Jan 4 19:53:57 2021 From: calderonchristian73 at gmail.com (Christian Calderon) Date: Mon, 4 Jan 2021 16:53:57 -0800 Subject: [pypy-dev] Funding for M1 Message-ID: Hello PyPy team, If you haven't acquired funding for an M1 machine yet I'd be happy to make a donation to cover the cost. Just let me know the exact amount you need and what method you would like to receive the funding through. Thanks for all the good work! ~ Christian Calderon -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.augier at univ-grenoble-alpes.fr Tue Jan 5 08:42:12 2021 From: pierre.augier at univ-grenoble-alpes.fr (PIERRE AUGIER) Date: Tue, 5 Jan 2021 14:42:12 +0100 (CET) Subject: [pypy-dev] New Python/PyPy extension for object oriented numerically intensive codes ? Message-ID: <1036764626.5121719.1609854132245.JavaMail.zimbra@univ-grenoble-alpes.fr> Hello, I wish you a Happy New Year. I would be very interested in using PyPy for numerically intensive scientific codes. I conclude from my experiments that PyPy could potentially be a great tool but that it is strongly limited in this area by the lack of some features in Python. For example: - Standard Python classes and standard Python instances are too dynamic for what we need for high performance. There should be a way to define in Python less dynamic classes and instances. - There is no specialized container to gather a fixed number of homogeneous objects and when possible to store them efficiently as native arrays of native variables. 
- There is no way to locally disable type and bounds checks.

I guess we don't have that in Python because it wouldn't be very useful for CPython and because Python has not been designed for performance. However, with efficient interpreters, I think many users would be happy to trade a bit of dynamism for better performance.

I thought about what was missing and it seems to me that it could be provided by a Python/PyPy extension without addition to the Python language. However, Numpy API is IMHO not adapted. Now that Python is (and will be more and more) compared to Julia, I think it becomes necessary to have a good tool to write efficient numerical codes in pure Python style.

I present here https://github.com/paugier/nbabel/blob/master/py/vector.md a possible new extension providing what I think would be needed to express, in OOP Python, things that are reasonable in terms of high performance computing. The detail of the proposed API is of course not very interesting at this point (it is just a dream). I am more interested in the point of view of PyPy developers and PyPy users about (i) the principle of this project (a Python extension to express in OOP Python things easier to be accelerated than standard Python) and (ii) the technical feasibility of this project: Is it technically possible to extend Python and PyPy to develop such an extension and make it very efficient? Which tools should be used? How should it be written?

Best regards,
Pierre

From yury at shurup.com Tue Jan 5 09:06:02 2021
From: yury at shurup.com (Yury V. Zaytsev)
Date: Tue, 5 Jan 2021 15:06:02 +0100 (CET)
Subject: [pypy-dev] New Python/PyPy extension for object oriented numerically intensive codes ?
In-Reply-To: <1036764626.5121719.1609854132245.JavaMail.zimbra@univ-grenoble-alpes.fr>
References: <1036764626.5121719.1609854132245.JavaMail.zimbra@univ-grenoble-alpes.fr>
Message-ID: <1ee66d8-9dee-f5fa-c64a-4d75afd977a@shurup.com>

On Tue, 5 Jan 2021, PIERRE AUGIER wrote:

> I thought about what was missing and it seems to me that it could be provided by a Python/PyPy extension without addition to the Python language. However, Numpy API is IMHO not adapted. Now that Python is (and will be more and more) compared to Julia, I think it becomes necessary to have a good tool to write efficient numerical codes in pure Python style.

Hi Pierre,

I assume that you've had a detailed look at Cython, hadn't you? All three points that you've listed are solved there in one way or another.

Of course, it comes with its own set of tradeoffs / disadvantages, but in my scientific life I can't say I was really constrained by them, because Cython blends with Python so naturally (per module and per function), so I was actually always starting with pure Python and then going all the way down to the level of SIMD assembly for the 1% of the code where it actually mattered (99% of the time was spent there)... plus the whole MPI story for scaling.

I'm afraid the situation is simply so good that there is too little motivation to solve this in Python itself :-/ and solving it in Python has its own problems. I guess one first really needs to find cases where solving it in Python is rationally mandated, to gather enough momentum.

--
Sincerely yours,
Yury V. Zaytsev

From panos.laganakos at gmail.com Wed Jan 6 03:33:13 2021
From: panos.laganakos at gmail.com (Panos Laganakos)
Date: Wed, 6 Jan 2021 10:33:13 +0200
Subject: [pypy-dev] PyPy.org 2021 redesign suggestion
In-Reply-To:
References:
Message-ID:

Thanks Antonio, I joined heptapod and got approved.
I've posted updates on the ticket Matti setup, so you can follow all future updates there instead of this thread: https://foss.heptapod.net/pypy/pypy.org/-/issues/7 On Mon, Jan 4, 2021 at 3:30 PM Antonio Cuni wrote: > On Mon, Jan 4, 2021 at 12:43 PM Panos Laganakos > wrote: > >> Glad you like it! >> >> Yeah, while the project itself is a banger, the "image" of PyPy is a bit >> lacking. But nothing that can't be fixed. I've set up my work schedule to >> have time to contribute to one non-work project, so I'm here to help with >> it if I can. >> > > wonderful! Let's fill this non-work project time with PyPy :) > You will probably need a heptapod account to comment on issues and/or make > MR: please follow the instructions here and we will be glad to give you > access: > https://doc.pypy.org/en/latest/contributing.html#get-access > > Also, most of us hang out on #pypy on freenode, so feel free to join if > you like to have more real-time communication. > > ciao, > Anto > -- P?no? https://panoslaganakos.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.augier at univ-grenoble-alpes.fr Wed Jan 6 15:40:52 2021 From: pierre.augier at univ-grenoble-alpes.fr (PIERRE AUGIER) Date: Wed, 6 Jan 2021 21:40:52 +0100 (CET) Subject: [pypy-dev] New Python/PyPy extension for object oriented numerically intensive codes ? In-Reply-To: <1ee66d8-9dee-f5fa-c64a-4d75afd977a@shurup.com> References: <1036764626.5121719.1609854132245.JavaMail.zimbra@univ-grenoble-alpes.fr> <1ee66d8-9dee-f5fa-c64a-4d75afd977a@shurup.com> Message-ID: <1621700934.6140536.1609965652703.JavaMail.zimbra@univ-grenoble-alpes.fr> ----- Mail original ----- > De: "Yury V. Zaytsev" > ?: "PIERRE AUGIER" > Cc: "pypy-dev" > Envoy?: Mardi 5 Janvier 2021 15:06:02 > Objet: Re: [pypy-dev] New Python/PyPy extension for object oriented numerically intensive codes ? > On Tue, 5 Jan 2021, PIERRE AUGIER wrote: > >> I thought about what was missing and it seems to me that it could be >> provided by a Python/PyPy extension without addition to the Python >> language. However, Numpy API is IMHO not adapted. Now that Python is >> (and will be more and more) compared to Julia, I think it becomes >> necessary to have a good tool to write efficient numerical codes in pure >> Python style. > > Hi Pierre, > > I assume that you've had a detailed look at Cython, hand't you? All three > points that you've listed are solved there in one way or another. > > Of course, it comes with its own set of tradeoffs / disadvantages, but in > my scientific life I can't say I was really constrained by them, because > Cython blends with Python so naturally (per module and per function), so I > was actually always starting with pure Python and then going down up to > the level of SIMD assembly for 1% of the code where it actually mattered > (99% time was spent)... plus the whole MPI story for scaling. I used quite a lot Cython some years ago. I'm actually pretty happy that we don't use it anymore in Fluiddyn packages :-) For ahead-of-time compilation of Python, Transonic-Pythran is in my opinion nicer to use and I usually get more efficient results with nicer codes than with the C-like big Cython extensions that we used to have. > I'm afraid the situation is simply so good that there is too little > motivation to solve this in Python itself :-/ and solving it in Python has > its own problems. I guess first one really needs to find cases, when > solving it in Python is rationally mandated to gather enough momentum. 
A big issue IMHO with Cython is that Cython code is not compatible with Python and can't be interpreted. So we lose the advantage of an interpreted language in terms of development. One small change in this big extension and one needs to recompile everything. For me, debugging is really harder (maybe because I'm not good at debugging native codes). Moreover, actually one needs to know (a bit of) C to write efficient Cython code so that it's difficult for some contributors to understand/develop Cython extensions.

Therefore, I'm convinced that the situation is not so good (see also https://fluiddyn.netlify.app/transonic-vision.html). It's also interesting to compare what can be done in Python (and Cython) and in Julia in terms of scientific computing. Again, it would be very useful in the long term to be able to write more efficient codes in "simple" interpreted Python.

> --
> Sincerely yours,
> Yury V. Zaytsev

I played a bit with cffi, which is not so far from what would be needed to develop the extension that I'd like to have. For example, this is quite efficient:

from cffi import FFI

ffi = FFI()
ffi.cdef(
    """
    typedef struct {
        double x, y, z;
    } point_t;
    """
)

def sum_x(vec):
    s = 0.0
    for elem in vec:
        s += elem.x
    return s

points = ffi.new("point_t[]", 1000)

In [5]: %timeit sum_x(points)
1.34 µs ± 0.693 ns per loop

compared to in Julia:

$ julia microbench_sum_x.jl
sum_x(positions)
  1.031 µs (1 allocation: 16 bytes)

However, `points` is an instance of `_cffi_backend._CDataBase` (as is `points[0]`) and it's not possible to "add methods to these objects". As soon as I hide the _cffi_backend._CDataBase points[0] objects in Python objects, it becomes much much slower.

This makes me think again that PyPy would really need a nice extension to write Python that is a bit less dynamic than standard Python but more efficient.

So my questions are: Is it technically possible to extend Python and PyPy to develop such an extension and make it very efficient? Which tools should be used? How should it be written?

Pierre

From yury at shurup.com Wed Jan 6 16:56:39 2021
From: yury at shurup.com (Yury V. Zaytsev)
Date: Wed, 6 Jan 2021 22:56:39 +0100 (CET)
Subject: [pypy-dev] New Python/PyPy extension for object oriented numerically intensive codes ?
In-Reply-To: <1621700934.6140536.1609965652703.JavaMail.zimbra@univ-grenoble-alpes.fr>
References: <1036764626.5121719.1609854132245.JavaMail.zimbra@univ-grenoble-alpes.fr> <1ee66d8-9dee-f5fa-c64a-4d75afd977a@shurup.com> <1621700934.6140536.1609965652703.JavaMail.zimbra@univ-grenoble-alpes.fr>
Message-ID: <45955343-2dcd-1c5a-18db-60874726e8f@shurup.com>

On Wed, 6 Jan 2021, PIERRE AUGIER wrote:

> A big issue IMHO with Cython is that Cython code is not compatible with Python and can't be interpreted. So we lose the advantage of an interpreted language in terms of development. One small change in this big extension and one needs to recompile everything.

That's a valid point to a certain extent - however, in my experience, I was always somehow able to extract individual small functions into mini-modules, and then I wrote some Makefile / setuptools glue to automate chained recompilation of all the parts that changed whenever I ran the unit tests or the command line interface, so recompilation kept annoying me only until I got the magic to work :-)

> For me, debugging is really harder (maybe because I'm not good at
> debugging native codes).
Moreover, actually one needs to know (a bit of) > C to write efficient Cython code so that it's difficult for some > contributors to understand/develop Cython extensions. I must admit that I never needed to debug anything because I was doing TDD in the fist place, but probably you are right - debugging generated monster codes must be quite scary as compared to pure Python code with full IDE support like PyCharm. Anyways, call me chauvinist, but I'd say it's just a sad fact of life that you need to know a thing or two about writing correct numeric low-level performance oriented code. I assume you know it anyways and I'm sure that your worked up summation example below was just to make a completely different point, but as a matter of fact in your code the worst-case error grows proportionally to the number of elements in the vector (N) and RMS error grows proportionally to the square root of N for random inputs, so the results of your computations are going to be accordingly pretty random in the general case ;-) Where I'm getting with this is that people who do this kind of stuff are somehow not bothered by Cython problems, and people who don't are rightfully bothered by valid issues, but if they are going to be helped, will it help their cause :-) ? Who knows... On top of that, again, there is the whole MPI story. I used to write Python stuff that scaled to the hundreds of thousands of cores. I still did SIMD inside OpenMP threads on the local nodes on top of that just for kicks, but actually I could have achieved a factor of 4x speedup just by scheduling my jobs overnight with 4x cores instead and saved myself the trouble. But I wanted trouble, because it was fun :-) Cython and mpi4py make MPI almost criminally easy on Python, so once you get this far, there comes the question - does 2x or 4x on the local node actually matter at all? > So my questions are: Is it technically possible to extend Python and > PyPy to develop such extension and make it very efficient? Which tools > should be used? How should it be written? It is absolutely technically possible and is a good idea in as far as I'm concerned, but I think that the challenge lies in developing conventions for semantics and getting people to accept them. I think that the zoo of various accelerators / compilers / boosters for Python only proves the point that this must be the hard part. As for a backing buffer access mechanism, cffi is definitively a right tool - PyPy can already "see through" it as you've proven with your small example. -- Sincerely yours, Yury V. Zaytsev From pierre.augier at univ-grenoble-alpes.fr Sun Jan 10 16:42:11 2021 From: pierre.augier at univ-grenoble-alpes.fr (PIERRE AUGIER) Date: Sun, 10 Jan 2021 22:42:11 +0100 (CET) Subject: [pypy-dev] New Python/PyPy extension for object oriented numerically intensive codes ? In-Reply-To: <45955343-2dcd-1c5a-18db-60874726e8f@shurup.com> References: <1036764626.5121719.1609854132245.JavaMail.zimbra@univ-grenoble-alpes.fr> <1ee66d8-9dee-f5fa-c64a-4d75afd977a@shurup.com> <1621700934.6140536.1609965652703.JavaMail.zimbra@univ-grenoble-alpes.fr> <45955343-2dcd-1c5a-18db-60874726e8f@shurup.com> Message-ID: <738001098.7626447.1610314931325.JavaMail.zimbra@univ-grenoble-alpes.fr> Hello, I thought again about this performance issue and this possible extension. One very important point is to be able to define immutable structures (like Julia struct) in Python and vectors of such structures. 
This is really a big advantage of Julia, which makes it much more efficient (see https://github.com/paugier/nbabel/tree/master/py/microbench).

I completely rewrote the presentation of this potential new extension: https://github.com/paugier/nbabel/blob/master/py/vector_v2.md

It is much less focused on array programming and more on simple object-oriented programming. It would allow one to write the equivalents of very efficient Julia codes in Python, for example something like this:

```python
import ooperf as oop


@oop.native_bag
class Point4D:
    # an immutable struct
    x: float
    y: float
    z: float
    w: float

    def square(self):
        return self.x**2 + self.y**2 + self.z**2 + self.w**2


Points = oop.Vector[Point4D]
points = Points.empty(1000)
```

I have 2 questions:

- Please, can anyone tell me how an extension providing native_bag and Vector could be written for PyPy? Which tool should be used?

- I also see that Julia is able to vectorize code like the line in `square` but not PyPy (even for cffi structs). Why? Is there a deep reason for that?

I conclude from my small experiments that cffi + Python is not sufficient.

Pierre

From pierre.augier at univ-grenoble-alpes.fr Thu Jan 14 10:34:12 2021
From: pierre.augier at univ-grenoble-alpes.fr (PIERRE AUGIER)
Date: Thu, 14 Jan 2021 16:34:12 +0100 (CET)
Subject: [pypy-dev] Freelist in PyPy? Reuse short lived objects?
Message-ID: <1919451462.10447507.1610638452697.JavaMail.zimbra@univ-grenoble-alpes.fr>

Hello,

I was still playing with the idea of speeding up codes that use small numerical objects.

I wrote a Cython extension which defines a Point (3d) cdef class and a Points cdef class (a vector of points). Both classes contain a pointer towards a point_ C struct:

ctypedef struct point_:
    float x, y, z

Of course, any computation with Point objects involves several very short-lived objects, and we really want to avoid all the associated malloc/free calls.

In Cython, one can decorate a cdef class with `@cython.freelist(8)` to reuse objects: https://cython.readthedocs.io/en/latest/src/userguide/extension_types.html#fast-instantiation

I tried to add a bit of logic to avoid freeing and allocating the memory for the struct (https://github.com/paugier/nbabel/blob/master/py/microbench/util_cython.pyx). If I understand correctly, doing such things is possible in CPython because the method __dealloc__ is called as soon as the object is no longer accessible from Python. Or we can use the fact that it's very fast to get the reference count of an instance. But I think it is not the case for PyPy.

Is there an alternative strategy that is efficient with PyPy?

Pierre

From tbaldridge at gmail.com Thu Jan 14 11:05:17 2021
From: tbaldridge at gmail.com (Timothy Baldridge)
Date: Thu, 14 Jan 2021 09:05:17 -0700
Subject: [pypy-dev] Freelist in PyPy? Reuse short lived objects?
In-Reply-To: <1919451462.10447507.1610638452697.JavaMail.zimbra@univ-grenoble-alpes.fr>
References: <1919451462.10447507.1610638452697.JavaMail.zimbra@univ-grenoble-alpes.fr>
Message-ID:

Well, if you write this in pure Python and run it via PyPy, I imagine that most of the time the Point objects won't be created at all, as the JIT will detect that they are created and don't escape the scope of the JIT loop, so they can be ripped apart and stored in locals.

But also, these sorts of optimizations make less sense in a GC environment where allocation is (almost) free, and the cost of freeing objects is lowered due to bulk reclaiming of objects.
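
To make that concrete, the rough shape of micro-benchmark I have in mind is something like this (an untested sketch with invented names, not Pierre's actual code):

class Point:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    def __add__(self, other):
        # a short-lived temporary Point is allocated on every call...
        return Point(self.x + other.x, self.y + other.y, self.z + other.z)


def accumulate(points):
    total = Point(0.0, 0.0, 0.0)
    for p in points:
        # ...unless the JIT can prove the temporary never escapes the loop,
        # in which case it is never actually created
        total = total + p
    return total.x + total.y + total.z


points = [Point(float(i), 0.5 * i, -1.0 * i) for i in range(10_000)]
for _ in range(200):
    accumulate(points)

# run with e.g. PYPYLOG=jit-log-opt,jit-summary:logfile pypy3 bench.py and
# look for allocations of Point (new_with_vtable) in the optimized traces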
I'd try writing some tests in pure Python, running PyPy with jit tracing and see what it spits out in the log. On Thu, Jan 14, 2021 at 8:34 AM PIERRE AUGIER wrote: > > Hello, > > I was still playing with the idea to speedup codes using small numerical objects. > > I wrote a Cython extension which defines a Point (3d) cdef class and a Points cdef class (a vector of points). Both classes contain a pointer towards a point_ C struct: > > ctypedef struct point_: > float x, y, z > > Of course, any computation with Point objects with involved several very short lived objects and we really want to avoid all the associated malloc/free calls. > > In Cython, one can decorate a cdef class with `@cython.freelist(8)` to reused objects: https://cython.readthedocs.io/en/latest/src/userguide/extension_types.html#fast-instantiation > > I try to add a bit of logic to avoid freeing and allocating the memory for the struct (https://github.com/paugier/nbabel/blob/master/py/microbench/util_cython.pyx). If I understand correctly, doing such things is possible in CPython because the method __dealloc__ is called as soon as the objects is not accessible from Python. Or we can use the fact that it's very fast to get the reference count of an instance. But I think it is not the case for PyPy. > > Is there an alternative strategy efficient with PyPy? > > Pierre > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev -- ?One of the main causes of the fall of the Roman Empire was that?lacking zero?they had no way to indicate successful termination of their C programs.? (Robert Firth) From cfbolz at gmx.de Fri Jan 15 01:44:01 2021 From: cfbolz at gmx.de (Carl Friedrich Bolz-Tereick) Date: Fri, 15 Jan 2021 07:44:01 +0100 Subject: [pypy-dev] Freelist in PyPy? Reuse short lived objects? In-Reply-To: <1919451462.10447507.1610638452697.JavaMail.zimbra@univ-grenoble-alpes.fr> References: <1919451462.10447507.1610638452697.JavaMail.zimbra@univ-grenoble-alpes.fr> Message-ID: <2F330ADC-910F-4787-8FE7-AA34B4F7ABC4@gmx.de> Hi Pierre, This is not ready at all and I don't have enough time to work on it at the moment, *however*: I have a small prototype (on the branch map-improvements) that changes the instance layout in PyPy to store type-stable instances with several fields that contain ints or floats much more efficiently. It seems to give a 50% speedup on your micro benchmark, so that's promising. There's still a bug somewhere and it needs very careful investigation whether it costs too much on non-numerical programs, but potentially this is a good improvement. Cython is not likely to help on PyPy, because the overhead of our C-API emulation is too high. A free list is also unfortunately not really workable for us, since our GC strategy is very different (we don't know when an object is freed). Cheers, Carl Friedrich On January 14, 2021 4:34:12 PM GMT+01:00, PIERRE AUGIER wrote: >Hello, > >I was still playing with the idea to speedup codes using small >numerical objects. > >I wrote a Cython extension which defines a Point (3d) cdef class and a >Points cdef class (a vector of points). Both classes contain a pointer >towards a point_ C struct: > >ctypedef struct point_: > float x, y, z > >Of course, any computation with Point objects with involved several >very short lived objects and we really want to avoid all the associated >malloc/free calls. 
>
>In Cython, one can decorate a cdef class with `@cython.freelist(8)` to reused objects: https://cython.readthedocs.io/en/latest/src/userguide/extension_types.html#fast-instantiation
>
>I try to add a bit of logic to avoid freeing and allocating the memory for the struct (https://github.com/paugier/nbabel/blob/master/py/microbench/util_cython.pyx). If I understand correctly, doing such things is possible in CPython because the method __dealloc__ is called as soon as the objects is not accessible from Python. Or we can use the fact that it's very fast to get the reference count of an instance. But I think it is not the case for PyPy.
>
>Is there an alternative strategy efficient with PyPy?
>
>Pierre
>_______________________________________________
>pypy-dev mailing list
>pypy-dev at python.org
>https://mail.python.org/mailman/listinfo/pypy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From muke101 at protonmail.com Mon Jan 18 19:45:58 2021
From: muke101 at protonmail.com (muke101)
Date: Tue, 19 Jan 2021 00:45:58 +0000
Subject: [pypy-dev] Contributing Polyhedral Optimisations in PyPy
In-Reply-To:
References: <1CdAb8DZ8jmSE-75kiASlGgHA96yRAGV1M9SJEYHYGkJlnOEircR3ybjk_Vk_Z05l00ouHJGqX1MJwnAb5nyCfmtdDIn9gHI5ckj8rMXnSs=@protonmail.com>
Message-ID:

Hi, so to update you both: I have decided to pursue this project after all, and I'm very excited to work on PyPy.

To reiterate my objective, I'll be trying to formulate a way to augment the JIT optimiser to expose enough information such that more advanced optimisations can be implemented, with Polyhedral compatibility in mind. Of the ideas suggested, I'm currently leaning towards trying to create a second optimisation layer for sufficiently hot code, which can take into account more of the program at once, as this seems similar to what other JIT compilers already employ. This is open to the problem of assumptions being invalidated that Armin brought up (if I understood correctly), but similar implementations, like the one in the paper I referred to below, have formulated methods to accommodate for this.

I think the key for PyPy would be figuring out how to track the correct metadata from previously seen traces such that the bytecode from relevant control paths can be brought together to work on, and mainly reconstructing entire loops once individual sufficiently hot traces are found. Once this is done, then actually any number of optimisations could be performed. The JIT compiler in the JavaScriptCore engine compiles the hottest bytecode down to LLVM-IR and sends it through the LLVM back end. I had looked into similar possibilities for Python, and it seems only a subset of the language can be compiled to LLVM-IR through Numba, which is a shame. If focusing on just Polyhedral optimisations, a possibility could be to write a SCoP detector for Python bytecode specifically, raise it to a Polyhedral representation and then import it into LLVM's Polly tool, but this is getting ahead a bit.

I'll be getting to grips with PyPy's codebase soon, after I'm comfortable with the fundamentals of tracing JITs. Do you have any suggestions on where to begin specifically for what I'm looking to do? I imagine generally all this will be within the JIT optimiser, but if there's anything specific you can think of please let me know.

Thanks.

Sent with ProtonMail Secure Email.

------- Original Message -------
On Friday, 18 December 2020 18:15, muke101 wrote: > Thanks both of you for getting back to me, these definitely seem like problems worth thinking about first. Looking into it, there has actually been some research already on implementing Polyhedral optimisations in a JIT optimiser, specifically in JavaScript. It's paper (http://impact.gforge.inria.fr/impact2018/papers/polyhedral-javascript.pdf) seems to point out the same problems you both bring up, like SCoP detection and aliasing, and how it worked around them. > > For now then I'll try and consider how ambitious replicating these solutions would be and if they would map into PyPy from JS cleanly - please let me know if any other hurdles come to mind in the meantime though. > > Thanks again for the advise. > > ??????? Original Message ??????? > On Friday, 18 December 2020 18:03, Armin Rigo armin.rigo at gmail.com wrote: > > > Hi, > > On Thu, 17 Dec 2020 at 23:48, William ML Leslie > > william.leslie.ttg at gmail.com wrote: > > > > > The challenge with implementing this in the pypy JIT at this point is > > > that the JIT only sees one control flow path. That is, one loop, and > > > the branches taken within that loop. It does not find out about the > > > outer loop usually until later, and may not ever find out about the > > > content of other control flow paths if they aren't taken. > > > > Note that strictly speaking, the problem is not that you haven't seen > > yet other code paths. It's Python, so you never know what may happen > > in the future---maybe another code path will be taken, or maybe > > someone will do crazy things with `sys._getframe()` or with the > > debugger `pdb`. So merely seeing all paths in a function doesn't > > really buy you a lot. No, the problem is that emitting machine code > > is incremental at the granularity of code paths. At the point where > > we see a new code path, all previously-seen code paths have already > > been completely optimized and turned into machine code, and we don't > > keep much information about them. > > To go beyond this simple model, what we have so far is that we can > > "invalidate" previous code paths at any point, when we figure out that > > they were compiled using assumptions that no longer hold. So using > > it, it would be possible in theory to do any amount of global > > optimizations: save enough additional information as you see each code > > path; use it later in the optimization of additional code paths; > > invalidate some of the old code paths if you figure out that its > > optimizations are no longer valid (but invalidate only, not write a > > new version yet); and when you later see the old code path being > > generated again, optimize it differently. It's all doable, but > > theoretical so far: I don't know of any JIT compiler that seriously > > does things like that. It's certainly worth a research paper IMHO. > > It also looks like quite some work. It's certainly not just "take > > some ideas from [ahead-of-time or full-method] compiler X and apply > > them to PyPy". > > A bient?t, > > Armin. From cfbolz at gmx.de Tue Jan 19 16:49:38 2021 From: cfbolz at gmx.de (Carl Friedrich Bolz-Tereick) Date: Tue, 19 Jan 2021 22:49:38 +0100 Subject: [pypy-dev] Contributing Polyhedral Optimisations in PyPy In-Reply-To: References: <1CdAb8DZ8jmSE-75kiASlGgHA96yRAGV1M9SJEYHYGkJlnOEircR3ybjk_Vk_Z05l00ouHJGqX1MJwnAb5nyCfmtdDIn9gHI5ckj8rMXnSs=@protonmail.com> Message-ID: Hi! it's a bit hard to know what what to suggest to start with. 
Would you be interested in setting up a Zoom call (eg next week, some evening CET) to discuss a bit your concrete plans and timeline? (I for one would be somewhat worried whether all that you are describing is doable in the timing context of a master thesis project, but it may be best to really discuss it person). Cheers, Carl Friedrich On 19.01.21 01:45, muke101 via pypy-dev wrote: > Hi, so to update you both I have decided to pursue this project after all, I'm very excited to work on PyPy. > > To reiterate my objective, I'll be trying to formulate a way to augment the JIT optimiser to expose enough information such that more advanced optimisations can be implemented, with Polyhedral compatibility in mind. Of the ideas suggested, I'm currently leaning towards trying to create a second optimisation layer for sufficiently hot code, which can take into account more of the program at once, as this seems similar to what other JIT compilers already employ. This is open to the problem of assumptions being invalidated that Armin bought up (if I understood correctly), but similar implementations like in the paper I referred to below have formulated methods to accommodate for this. I think the key for PyPy would be figuring out how to track the correct metadata from previously seen traces such that the bytecode from relevant control paths can be bought together to work on, and mainly reconstructing entire loops once individual sufficiently hot traces are found. Once this is done then actually any number of optimisations could be preformed. The JIT compiler in the JavaScriptCore engine compiles the hottest bytecode down to LLVM-IR and sends it through the LLVM back end. I had looked into similar possibilities for Python, and it seems only a subset of the language can be compiled to LLVM-IR through Numa though, which is a shame. If focusing on just Polyhedral optimisations though a possibility could be to write a SCoP detector for Python bytecode specifically, raise it to Polyhedral representation and then import it into LLVM's Polly tool, but this is getting ahead a bit. > > I'll be getting to grips with PyPy's codebase soon, after I'm comfortable with the fundamentals of tracing JIT's. Do you have any suggestions on where to begin specifically for what I'm looking to do? I imagine generally all this will be within the JIT optimiser, but if there's anything specific you can think of please let me know. > > Thanks. > > > Sent with ProtonMail Secure Email. > From cfbolz at gmx.de Wed Jan 20 06:33:43 2021 From: cfbolz at gmx.de (Carl Friedrich Bolz-Tereick) Date: Wed, 20 Jan 2021 12:33:43 +0100 Subject: [pypy-dev] Freelist in PyPy? Reuse short lived objects? In-Reply-To: <2F330ADC-910F-4787-8FE7-AA34B4F7ABC4@gmx.de> References: <1919451462.10447507.1610638452697.JavaMail.zimbra@univ-grenoble-alpes.fr> <2F330ADC-910F-4787-8FE7-AA34B4F7ABC4@gmx.de> Message-ID: On 15.01.21 07:44, Carl Friedrich Bolz-Tereick wrote: > This is not ready at all and I don't have enough time to work on it at > the moment, *however*: I have a small prototype (on the branch > map-improvements) that changes the instance layout in PyPy to store > type-stable instances with several fields that contain ints or floats > much more efficiently. It seems to give a 50% speedup on your micro > benchmark, so that's promising. There's still a bug somewhere and it > needs very careful investigation whether it costs too much on > non-numerical programs, but potentially this is a good improvement. 
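
To spell out what "type-stable" means here, a toy illustration that I just made up (it's nothing to do with Pierre's actual benchmark): the trick only applies when every instance of a class keeps the same primitive type in the same fields, so that the map can describe one compact layout for all of them.

class Point3D:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

# type-stable: x, y and z are always floats, so all instances share one
# layout and the field values can presumably be stored without boxing
good = [Point3D(1.5 * i, 2.5 * i, -0.5 * i) for i in range(1000)]

# not type-stable: the same attributes hold sometimes a float, sometimes a
# string or None, so these instances cannot use the compact layout
bad = [Point3D(float(i), "spam" if i % 2 else 2.5, None) for i in range(1000)]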
Seems to be even more like a 90% improvement on your microbench (from 13.0 to 7.0). I also fixed the bug. Some more work is needed, but it looks relatively promising at this point. Cheers, CF From pierre.augier at univ-grenoble-alpes.fr Thu Jan 21 08:42:32 2021 From: pierre.augier at univ-grenoble-alpes.fr (PIERRE AUGIER) Date: Thu, 21 Jan 2021 14:42:32 +0100 (CET) Subject: [pypy-dev] Freelist in PyPy? Reuse short lived objects? In-Reply-To: References: <1919451462.10447507.1610638452697.JavaMail.zimbra@univ-grenoble-alpes.fr> <2F330ADC-910F-4787-8FE7-AA34B4F7ABC4@gmx.de> Message-ID: <1115166851.3380572.1611236552376.JavaMail.zimbra@univ-grenoble-alpes.fr> ----- Mail original ----- > De: "Carl Friedrich Bolz-Tereick" > ?: "pypy-dev" , "PIERRE AUGIER" , "pypy-dev" > > Envoy?: Mercredi 20 Janvier 2021 12:33:43 > Objet: Re: [pypy-dev] Freelist in PyPy? Reuse short lived objects? > On 15.01.21 07:44, Carl Friedrich Bolz-Tereick wrote: >> This is not ready at all and I don't have enough time to work on it at >> the moment, *however*: I have a small prototype (on the branch >> map-improvements) that changes the instance layout in PyPy to store >> type-stable instances with several fields that contain ints or floats >> much more efficiently. It seems to give a 50% speedup on your micro >> benchmark, so that's promising. There's still a bug somewhere and it >> needs very careful investigation whether it costs too much on >> non-numerical programs, but potentially this is a good improvement. > > Seems to be even more like a 90% improvement on your microbench (from > 13.0 to 7.0). I also fixed the bug. Some more work is needed, but it > looks relatively promising at this point. Yes, it looks really promising! It could bring the pure Python implementation much closer to the C and Fortran implementations used for the benchmark in the Nature Astro paper (Zwart, 2020). Note that I'm doing some power usage measurements with serious hardware so I'll soon be able to reproduce and extend the figure shown here: https://github.com/paugier/nbabel. It means that I will soon have everything to propose a serious reply to Zwart (2020). To try your version, I guess I need to compile PyPy? And I also guess that it's only for PyPy2 first? Pierre From cfbolz at gmx.de Thu Jan 21 09:22:46 2021 From: cfbolz at gmx.de (Carl Friedrich Bolz-Tereick) Date: Thu, 21 Jan 2021 15:22:46 +0100 Subject: [pypy-dev] Freelist in PyPy? Reuse short lived objects? In-Reply-To: <1115166851.3380572.1611236552376.JavaMail.zimbra@univ-grenoble-alpes.fr> References: <1919451462.10447507.1610638452697.JavaMail.zimbra@univ-grenoble-alpes.fr> <2F330ADC-910F-4787-8FE7-AA34B4F7ABC4@gmx.de> <1115166851.3380572.1611236552376.JavaMail.zimbra@univ-grenoble-alpes.fr> Message-ID: On 1/21/21 2:42 PM, PIERRE AUGIER wrote: > > ----- Mail original ----- >> De: "Carl Friedrich Bolz-Tereick" >> ?: "pypy-dev" , "PIERRE AUGIER" , "pypy-dev" >> >> Envoy?: Mercredi 20 Janvier 2021 12:33:43 >> Objet: Re: [pypy-dev] Freelist in PyPy? Reuse short lived objects? > >> On 15.01.21 07:44, Carl Friedrich Bolz-Tereick wrote: >>> This is not ready at all and I don't have enough time to work on it at >>> the moment, *however*: I have a small prototype (on the branch >>> map-improvements) that changes the instance layout in PyPy to store >>> type-stable instances with several fields that contain ints or floats >>> much more efficiently. It seems to give a 50% speedup on your micro >>> benchmark, so that's promising. 
There's still a bug somewhere and it
>>> needs very careful investigation whether it costs too much on
>>> non-numerical programs, but potentially this is a good improvement.
>>
>> Seems to be even more like a 90% improvement on your microbench (from 13.0 to 7.0). I also fixed the bug. Some more work is needed, but it looks relatively promising at this point.
>
> Yes, it looks really promising! It could bring the pure Python implementation much closer to the C and Fortran implementations used for the benchmark in the Nature Astro paper (Zwart, 2020).
>
> Note that I'm doing some power usage measurements with serious hardware so I'll soon be able to reproduce and extend the figure shown here: https://github.com/paugier/nbabel. It means that I will soon have everything to propose a serious reply to Zwart (2020).

Cool!

> To try your version, I guess I need to compile PyPy? And I also guess that it's only for PyPy2 first?

If you tell me your platform I can ask the buildbots to make you a binary (pypy3 works too, should not be hard to merge).

Cheers,

CF

From yasartecer.tr at gmail.com Thu Jan 21 16:26:36 2021
From: yasartecer.tr at gmail.com (=?UTF-8?B?eWHFn2FyIHRlY2Vy?=)
Date: Fri, 22 Jan 2021 00:26:36 +0300
Subject: [pypy-dev] Hi. Help pls. Embedding PyPy in C++ but tkinter gives an error
Message-ID:

I am embedding PyPy in C++. Everything works fine, but when I import the tkinter module, I get an error. I am compiling the code in Visual Studio 2019.

Sample code:

#include "PyPy.h"  /* declares rpython_startup_code, pypy_setup_home, pypy_execute_source, ... */

int main() {
    int res;
    rpython_startup_code();
    pypy_setup_home("C:\\pypy37\\", 1);
    res = pypy_execute_source("import math ; print (math.pow(2,3)) ");
    pypy_init_threads();
    return res;
}

PS C:\Users\.....\x64\Debug> ./pypyrun.exe
8.0

The above code works without errors. But the code below fails: the imported tkinter module generates an error.

#include "PyPy.h"

int main() {
    int res;
    rpython_startup_code();
    pypy_setup_home("C:\\pypy37\\", 1);
    res = pypy_execute_source("from tkinter import *;window = Tk();window.mainloop() ");
    pypy_init_threads();
    return res;
}

The error output is as follows:

PS C:\Users\.....\x64\Debug> ./pypyrun.exe
debug: OperationError:
debug: operror-type: IndexError
debug: operror-value: list index out of range

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pierre.augier at univ-grenoble-alpes.fr Mon Jan 25 14:44:27 2021
From: pierre.augier at univ-grenoble-alpes.fr (PIERRE AUGIER)
Date: Mon, 25 Jan 2021 20:44:27 +0100 (CET)
Subject: [pypy-dev] Results NBabel benchmark CO2 production versus time: good new for PyPy map-improvements
Message-ID: <1448582732.1929701.1611603867125.JavaMail.zimbra@univ-grenoble-alpes.fr>

Hi,

I did some timing and energy consumption measurements with https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial

I think the results tend to validate the approach used in the branch map-improvements (https://foss.heptapod.net/pypy/pypy/-/tree/branch/map-improvements). I attach one of the first figures, including a run using an interpreter built with these changes (http://buildbot.pypy.org/nightly/map-improvements-3.7/pypy-c-jit-latest-linux64.tar.bz2).

To be compared with https://raw.githubusercontent.com/paugier/nbabel/master/py/fig/fig_ecolo_impact_transonic.png taken from Zwart (2020).

The implementation run with PyPy map-improvements is faster than the implementations using Numba, Fortran and C++ (with flags like -Ofast and -march=native activated!). It's a great result! Congratulations!
For people interested, the code for the benchmarks and the measurement is here https://github.com/paugier/nbabel Pierre -------------- next part -------------- A non-text attachment was scrubbed... Name: fig_bench_nbabel.png Type: image/png Size: 46257 bytes Desc: not available URL: From cfbolz at gmx.de Tue Jan 26 03:10:20 2021 From: cfbolz at gmx.de (Carl Friedrich Bolz-Tereick) Date: Tue, 26 Jan 2021 09:10:20 +0100 Subject: [pypy-dev] Results NBabel benchmark CO2 production versus time: good new for PyPy map-improvements In-Reply-To: <1448582732.1929701.1611603867125.JavaMail.zimbra@univ-grenoble-alpes.fr> References: <1448582732.1929701.1611603867125.JavaMail.zimbra@univ-grenoble-alpes.fr> Message-ID: <19c6437b-91ee-b900-2014-616a5bd99157@gmx.de> Hi Pierre, wow, those numbers are quite something! I suppose the C++ code could be optimized some more? Do you plan to submit that soon? Would it make your story easier if I tried to push ahead with getting map-improvements merged? Also, would you maybe be interested in (co-?)writing a blog post for the PyPy blog?: https://morepypy.blogspot.com/ Cheers, Carl Friedrich On 1/25/21 8:44 PM, PIERRE AUGIER wrote: > Hi, > > I did some timing and energy consumption measurements with https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial > > I think the results tend to validate the approach used in the branch map-improvements (https://foss.heptapod.net/pypy/pypy/-/tree/branch/map-improvements). I attach one of the first figure including a run using an interpreter build with these changes (http://buildbot.pypy.org/nightly/map-improvements-3.7/pypy-c-jit-latest-linux64.tar.bz2). > > To be compared with https://raw.githubusercontent.com/paugier/nbabel/master/py/fig/fig_ecolo_impact_transonic.png taken from Zwart (2020). > > The implementation run with PyPy map-improvements is faster than the implementations using Numba, Fortran and C++ (with flags like -Ofast and -march=native activated!) ! It's a great result! Congratulation! > > For people interested, the code for the benchmarks and the measurement is here https://github.com/paugier/nbabel > > Pierre > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From oliver.margetts at gmail.com Tue Jan 26 12:07:02 2021 From: oliver.margetts at gmail.com (Oliver Margetts) Date: Tue, 26 Jan 2021 17:07:02 +0000 Subject: [pypy-dev] Some web benchmarks Message-ID: Hello all, not sure where to post this, so I'll put it here. We recently wrote a blog post where we benchmarked some python web frameworks: https://suade.org/dev/12-requests-per-second-with-python/ One of the big conclusions was: if you want faster code, use pypy! I was pleasantly surprised as I was under the impression that web wasn't pypy's strongest area (I'm not sure where from). I wondered if you would consider adding something as simple as a "Hello, World" benchmark to speed.pypy.org? Aside from that, thanks for all of your efforts on making pypy fast! -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Tue Jan 26 12:57:31 2021 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 26 Jan 2021 19:57:31 +0200 Subject: [pypy-dev] Some web benchmarks In-Reply-To: References: Message-ID: <34682199-d015-6e84-f2d4-ff6a5fdb36b7@gmail.com> On 1/26/21 7:07 PM, Oliver Margetts wrote: > Hello all, > > not sure where to post this, so I'll put it here. 
We recently wrote a > blog post where we benchmarked some python web frameworks: > https://suade.org/dev/12-requests-per-second-with-python/ > > One of the big conclusions was: if you want faster code, use pypy! I > was pleasantly surprised as I was under the impression that web wasn't > pypy's strongest area (I'm not sure where from). I wondered if you > would consider adding something as simple as a "Hello, World" > benchmark to speed.pypy.org ? > > Aside from that, thanks for all of your efforts on making pypy fast! > Thanks!. I liked the post. We have a simple django benchmark in the suite which shows PyPy to be about 4x faster for templating[0], I don't think it actually sets up for answering requests. There is a need for more complete site benchmarking like in your post. You may be interested in a deep dive[1] into profiling one django app with PyPy. Matti [0] https://foss.heptapod.net/pypy/benchmarks/-/blob/branch/default/unladen_swallow/performance/bm_django.py#L34 [1] https://lincolnloop.com/blog/faster-django-sites-pypy/ From oliver.margetts at gmail.com Tue Jan 26 14:00:51 2021 From: oliver.margetts at gmail.com (Oliver Margetts) Date: Tue, 26 Jan 2021 19:00:51 +0000 Subject: [pypy-dev] Some web benchmarks In-Reply-To: <34682199-d015-6e84-f2d4-ff6a5fdb36b7@gmail.com> References: <34682199-d015-6e84-f2d4-ff6a5fdb36b7@gmail.com> Message-ID: Thanks! Not sure how I missed the Django templating one - probably my unconscious Flask bias ;) On Tue, 26 Jan 2021 at 17:57, Matti Picus wrote: > On 1/26/21 7:07 PM, Oliver Margetts wrote: > > > Hello all, > > > > not sure where to post this, so I'll put it here. We recently wrote a > > blog post where we benchmarked some python web frameworks: > > https://suade.org/dev/12-requests-per-second-with-python/ > > > > One of the big conclusions was: if you want faster code, use pypy! I > > was pleasantly surprised as I was under the impression that web wasn't > > pypy's strongest area (I'm not sure where from). I wondered if you > > would consider adding something as simple as a "Hello, World" > > benchmark to speed.pypy.org ? > > > > Aside from that, thanks for all of your efforts on making pypy fast! > > > > Thanks!. I liked the post. We have a simple django benchmark in the > suite which shows PyPy to be about 4x faster for templating[0], I don't > think it actually sets up for answering requests. There is a need for > more complete site benchmarking like in your post. > > > You may be interested in a deep dive[1] into profiling one django app > with PyPy. > > > Matti > > > [0] > > https://foss.heptapod.net/pypy/benchmarks/-/blob/branch/default/unladen_swallow/performance/bm_django.py#L34 > > [1] https://lincolnloop.com/blog/faster-django-sites-pypy/ > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre.augier at univ-grenoble-alpes.fr Tue Jan 26 15:07:29 2021 From: pierre.augier at univ-grenoble-alpes.fr (PIERRE AUGIER) Date: Tue, 26 Jan 2021 21:07:29 +0100 (CET) Subject: [pypy-dev] Results NBabel benchmark CO2 production versus time: good new for PyPy map-improvements In-Reply-To: <19c6437b-91ee-b900-2014-616a5bd99157@gmx.de> References: <1448582732.1929701.1611603867125.JavaMail.zimbra@univ-grenoble-alpes.fr> <19c6437b-91ee-b900-2014-616a5bd99157@gmx.de> Message-ID: <157616271.2679280.1611691649629.JavaMail.zimbra@univ-grenoble-alpes.fr> ----- Mail original ----- > De: "Carl Friedrich Bolz-Tereick" > ?: "PIERRE AUGIER" , "pypy-dev" > Envoy?: Mardi 26 Janvier 2021 09:10:20 > Objet: Re: [pypy-dev] Results NBabel benchmark CO2 production versus time: good new for PyPy map-improvements > Hi Pierre, > > wow, those numbers are quite something! I suppose the C++ code could be > optimized some more? Yes, of course! The C++ code is really not great. Even the Fortran code could easily be optimized a bit more. However, I think they are representative of many C++ and Fortran codes written by scientists. > Do you plan to submit that soon? Would it make your story easier if I > tried to push ahead with getting map-improvements merged? I do not plan to submit that very soon. However, I plan to contact the editors very soon. My point of view is the following: I don't think it would be honest to include in a paper the point using map-improvements before this branch is merged so I would rather wait for it... > Also, would you maybe be interested in (co-?)writing a blog post for the > PyPy blog?: https://morepypy.blogspot.com/ Yes, I can really help on such blog post! But I'm definitively not the right guy to explain the principle of map-improvements! Also, for me, the possible comment in Nature Astronomy has a higher priority. > Cheers, > > Carl Friedrich > > On 1/25/21 8:44 PM, PIERRE AUGIER wrote: >> Hi, >> >> I did some timing and energy consumption measurements with >> https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial >> >> I think the results tend to validate the approach used in the branch >> map-improvements >> (https://foss.heptapod.net/pypy/pypy/-/tree/branch/map-improvements). I attach >> one of the first figure including a run using an interpreter build with these >> changes >> (http://buildbot.pypy.org/nightly/map-improvements-3.7/pypy-c-jit-latest-linux64.tar.bz2). >> >> To be compared with >> https://raw.githubusercontent.com/paugier/nbabel/master/py/fig/fig_ecolo_impact_transonic.png >> taken from Zwart (2020). >> >> The implementation run with PyPy map-improvements is faster than the >> implementations using Numba, Fortran and C++ (with flags like -Ofast and >> -march=native activated!) ! It's a great result! Congratulation! >> >> For people interested, the code for the benchmarks and the measurement is here >> https://github.com/paugier/nbabel >> >> Pierre >> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev From cfbolz at gmx.de Tue Jan 26 15:26:29 2021 From: cfbolz at gmx.de (Carl Friedrich Bolz-Tereick) Date: Tue, 26 Jan 2021 21:26:29 +0100 Subject: [pypy-dev] Some web benchmarks In-Reply-To: References: <34682199-d015-6e84-f2d4-ff6a5fdb36b7@gmail.com> Message-ID: On 1/26/21 8:00 PM, Oliver Margetts wrote: > Thanks! 
Not sure how I missed the Django templating one - probably my
> unconscious Flask bias ;)

There are also two sqlalchemy benchmarks, which are actually a bit slower than CPython. I don't know exactly what they measure though. Maybe an end-to-end test of some form would be an interesting addition?

Cheers,

CF
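
Something along these lines is what I have in mind - a rough sketch I just typed up with made-up names, not code from our benchmark suite, and without any careful benchmarking methodology:

import threading
import time

import requests
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    # the smallest possible amount of application work, so that framework
    # and request-parsing overhead dominate
    return "Hello, World!"


def run_server():
    # the dev server is enough for a rough comparison; a real benchmark
    # would put the app behind gunicorn or similar
    app.run(port=5001)


threading.Thread(target=run_server, daemon=True).start()
time.sleep(2)  # let the server start and give the JIT some time to warm up

N = 10_000
session = requests.Session()
start = time.time()
for _ in range(N):
    session.get("http://127.0.0.1:5001/")
elapsed = time.time() - start
print("requests per second:", N / elapsed)

Even something that crude would exercise routing and request parsing end to end, which the templating benchmark doesn't.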