From fijall at gmail.com Wed May 1 00:13:24 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 1 May 2013 00:13:24 +0200 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: This is a kind of example where our GC card marking does not quite work. I think the improve-rdict branch should improve this kind of code quite a bit (but I still have to finish it) On Tue, Apr 30, 2013 at 6:51 PM, Armin Rigo wrote: > Hi, > > On Tue, Apr 30, 2013 at 5:26 AM, cat street wrote: >> You can test this code: >> (...) > > For no good reason it seems that on this example CPython is quite a > bit faster on Linux64 than on Linux32. PyPy is also a bit faster on > Linux64 but not by such a large margin. In my tests (PyPy vs CPython) > it ends up the same on Linux32, and on Linux64 PyPy is a bit slower > (20%?). I think it's good enough given the type of code (completely > unoptimizable as far as I can tell, unless we go for "we can kill the > whole loop in this benchmark", which is usually a bit pointless in > real code). If others want to look in detail at JIT traces, feel free > to. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From alex.gaynor at gmail.com Wed May 1 00:24:59 2013 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Tue, 30 Apr 2013 15:24:59 -0700 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: I don't think this is a GC case. I think this is a case of loops with only a few iterations aren't fast enough. Alex On Tue, Apr 30, 2013 at 3:13 PM, Maciej Fijalkowski wrote: > This is a kind of example where our GC card marking does not quite > work. I think the improve-rdict branch should improve this kind of > code quite a bit (but I still have to finish it) > > On Tue, Apr 30, 2013 at 6:51 PM, Armin Rigo wrote: > > Hi, > > > > On Tue, Apr 30, 2013 at 5:26 AM, cat street wrote: > >> You can test this code: > >> (...) > > > > For no good reason it seems that on this example CPython is quite a > > bit faster on Linux64 than on Linux32. PyPy is also a bit faster on > > Linux64 but not by such a large margin. In my tests (PyPy vs CPython) > > it ends up the same on Linux32, and on Linux64 PyPy is a bit slower > > (20%?). I think it's good enough given the type of code (completely > > unoptimizable as far as I can tell, unless we go for "we can kill the > > whole loop in this benchmark", which is usually a bit pointless in > > real code). If others want to look in detail at JIT traces, feel free > > to. > > > > > > A bient?t, > > > > Armin. > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arigo at tunes.org Wed May 1 00:18:44 2013 From: arigo at tunes.org (Armin Rigo) Date: Wed, 1 May 2013 00:18:44 +0200 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: Hi Fijal, On Wed, May 1, 2013 at 12:13 AM, Maciej Fijalkowski wrote: > This is a kind of example where our GC card marking does not quite > work. No, not in this case. It only builds dicts and lists with 10 elements and forgets them immediately. A bientôt, Armin.
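The benchmark code itself is elided above, but Armin describes its core further down in the thread: a loop that runs ``dict(zip(keys, vals))`` about 1'000'000 times, where keys and vals are lists of length 10. A rough sketch of that shape (names are illustrative, not the reporter's actual script):

    import time

    # two small lists of length 10, as in Armin's description
    keys = ['k%d' % i for i in range(10)]
    vals = range(10)

    start = time.time()
    for _ in xrange(1000000):
        # builds a 10-element list of pairs and a 10-element dict,
        # then immediately forgets them
        d = dict(zip(keys, vals))
    print time.time() - start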
From micahel at gmail.com Wed May 1 03:54:39 2013 From: micahel at gmail.com (Michael Hudson-Doyle) Date: Wed, 1 May 2013 13:54:39 +1200 Subject: [pypy-dev] hexiom2 benchmark Message-ID: Hi all, Long time no see! Apologies if this is not the right place to ask this question. I'm trying to run the benchmarks from speed.pypy.org on my system (just with cpython for now -- there is a chance that Linaro will be doing some work on the performance of Python on ARM at some point), but when I check out https://bitbucket.org/pypy/benchmarks/overview and run "./runner.py --fast" I get this: Running hexiom2... 
INFO:root:Running /usr/bin/python /home/mwhudson/src/benchmarks/own/hexiom2.py -n 5 INFO:root:Running /usr/bin/python /home/mwhudson/src/benchmarks/own/hexiom2.py -n 5 Traceback (most recent call last): File "./runner.py", line 302, in main(sys.argv[1:]) File "./runner.py", line 283, in main full_store=full_store, branch=branch) File "./runner.py", line 41, in run_and_store results = perf.main(opts, funcs) File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line 1617, in main bench_result = func(base_cmd_prefix, changed_cmd_prefix, options) File "/home/mwhudson/src/benchmarks/benchmarks.py", line 17, in BM return SimpleBenchmark(Measure, *args, **kwds) File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line 462, in SimpleBenchmark return CompareBenchmarkData(base_data, changed_data, options) File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line 736, in CompareBenchmarkData return CompareMultipleRuns(base_times, changed_times, options) File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line 704, in CompareMultipleRuns significant, t_score = IsSignificant(base_times, changed_times) File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line 189, in IsSignificant t_score = TScore(sample1, sample2) File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line 170, in TScore return (avg(sample1) - avg(sample2)) / math.sqrt(error * 2) ZeroDivisionError: float division by zero is this just an artifact of running --fast? It seems the code could probably do with being a bit more robust? Cheers, mwh -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed May 1 10:42:40 2013 From: arigo at tunes.org (Armin Rigo) Date: Wed, 1 May 2013 10:42:40 +0200 Subject: [pypy-dev] GSoc deadline for students -> 3 May, 12:00 (US Pacific) Message-ID: Hi all, This is a reminder: for students that would like to enroll under the GSoC program for this summer, the deadline to push your proposal is Friday the 3rd of May, at 12:00 U.S. Pacific. For your own sake, push your proposal at least a couple of days earlier! You can always change it in any way you want (or say you retract it) until the deadline. http://www.google-melange.com/ A bient?t, Armin. From arigo at tunes.org Wed May 1 10:56:48 2013 From: arigo at tunes.org (Armin Rigo) Date: Wed, 1 May 2013 10:56:48 +0200 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: Hi Alex, On Wed, May 1, 2013 at 12:24 AM, Alex Gaynor wrote: > I don't think this is a GC case. I think this is a case of loops with only a > few iterations aren't fast enough. Dudes, can anyone look seriously at the benchmark? :-) The core of this benchmark is a loop that does 1'000'000 times "dict(zip(keys, vals))", where keys and vals are lists of length 10. A bient?t, Armin. From arigo at tunes.org Wed May 1 10:53:32 2013 From: arigo at tunes.org (Armin Rigo) Date: Wed, 1 May 2013 10:53:32 +0200 Subject: [pypy-dev] GSoc deadline for students -> 3 May, 12:00 (US Pacific) In-Reply-To: References: Message-ID: Re-hi, Also, please remember to write "pypy" in your proposal title. If you don't, your application might get lost in the mass of applications. A bient?t, Armin. 
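About the ZeroDivisionError in the hexiom2 traceback above: the failing line divides by ``math.sqrt(error * 2)``, and (as Armin explains in the next message) hexiom2 runs only once per interpreter under --fast, so the error term comes out as exactly 0.0. A rough sketch of the kind of zero-variance guard that avoids the crash -- not necessarily what the actual fix in f7abffc04667 does:

    import math

    def t_score(sample1, sample2, error):
        # 'error' stands for perf.py's pooled error estimate; with a
        # single timing per interpreter it is exactly 0.0
        def avg(xs):
            return sum(xs) / float(len(xs))
        if error == 0:
            # no measurable variance: report the difference as not significant
            return 0.0
        return (avg(sample1) - avg(sample2)) / math.sqrt(error * 2)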
From arigo at tunes.org Wed May 1 11:16:40 2013 From: arigo at tunes.org (Armin Rigo) Date: Wed, 1 May 2013 11:16:40 +0200 Subject: [pypy-dev] hexiom2 benchmark In-Reply-To: References: Message-ID: Hi Michael, On Wed, May 1, 2013 at 3:54 AM, Michael Hudson-Doyle wrote: > File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line 170, in > TScore > return (avg(sample1) - avg(sample2)) / math.sqrt(error * 2) > ZeroDivisionError: float division by zero The code could be more robust indeed :-( It's because hexiom2 runs only once in --fast mode, so it ends up with error == exactly 0.0. Fixed in f7abffc04667. A bient?t, Armin. From fijall at gmail.com Wed May 1 11:47:29 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 1 May 2013 11:47:29 +0200 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: On Wed, May 1, 2013 at 10:56 AM, Armin Rigo wrote: > Hi Alex, > > On Wed, May 1, 2013 at 12:24 AM, Alex Gaynor wrote: >> I don't think this is a GC case. I think this is a case of loops with only a >> few iterations aren't fast enough. > > Dudes, can anyone look seriously at the benchmark? :-) > > The core of this benchmark is a loop that does 1'000'000 times > "dict(zip(keys, vals))", where keys and vals are lists of length 10. oops, indeed, I looked but then I swapped numbers in my mind From fijall at gmail.com Wed May 1 11:48:58 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 1 May 2013 11:48:58 +0200 Subject: [pypy-dev] hexiom2 benchmark In-Reply-To: References: Message-ID: On Wed, May 1, 2013 at 11:16 AM, Armin Rigo wrote: > Hi Michael, > > On Wed, May 1, 2013 at 3:54 AM, Michael Hudson-Doyle wrote: >> File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line 170, in >> TScore >> return (avg(sample1) - avg(sample2)) / math.sqrt(error * 2) >> ZeroDivisionError: float division by zero > > The code could be more robust indeed :-( It's because hexiom2 runs > only once in --fast mode, so it ends up with error == exactly 0.0. > Fixed in f7abffc04667. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Btw, this is a benchmark run (without --fast) summary: http://paste.pound-python.org/show/32751/ It seems CPython on ARM is kinda bad (despite the fact that our assembler is bad too) From alex.gaynor at gmail.com Wed May 1 16:08:55 2013 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Wed, 1 May 2013 07:08:55 -0700 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: I read the benchmark, it's the loop inside of `zip()` which has very few iterations. Alex On Wed, May 1, 2013 at 1:56 AM, Armin Rigo wrote: > Hi Alex, > > On Wed, May 1, 2013 at 12:24 AM, Alex Gaynor > wrote: > > I don't think this is a GC case. I think this is a case of loops with > only a > > few iterations aren't fast enough. > > Dudes, can anyone look seriously at the benchmark? :-) > > The core of this benchmark is a loop that does 1'000'000 times > "dict(zip(keys, vals))", where keys and vals are lists of length 10. > > > A bient?t, > > Armin. > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arigo at tunes.org Wed May 1 19:19:38 2013 From: arigo at tunes.org (Armin Rigo) Date: Wed, 1 May 2013 19:19:38 +0200 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: Hi Alex, On Wed, May 1, 2013 at 4:08 PM, Alex Gaynor wrote: > I read the benchmark, it's the loop inside of `zip()` which has very few > iterations. Ah oh. Sorry. I forgot that zip() is implemented at app-level. Could it be helpful to have a faster version of zip() specialized to two arguments? It would avoid the loop of length 2 that we do for each pair of items. A bient?t, Armin. From alex.gaynor at gmail.com Wed May 1 19:22:15 2013 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Wed, 1 May 2013 10:22:15 -0700 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: Yes, we have a specialized map for 2 arguments, a specialized zip makes sense. (Or figuring out how to specialize that loop for N-arguments where N is ~smallish so the inner loop is unrolled at app level, that's harder, but probably worthwhile n the long run). Alex On Wed, May 1, 2013 at 10:19 AM, Armin Rigo wrote: > Hi Alex, > > On Wed, May 1, 2013 at 4:08 PM, Alex Gaynor wrote: > > I read the benchmark, it's the loop inside of `zip()` which has very few > > iterations. > > Ah oh. Sorry. I forgot that zip() is implemented at app-level. > > Could it be helpful to have a faster version of zip() specialized to > two arguments? It would avoid the loop of length 2 that we do for > each pair of items. > > > A bient?t, > > Armin. > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed May 1 19:35:13 2013 From: arigo at tunes.org (Armin Rigo) Date: Wed, 1 May 2013 19:35:13 +0200 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: Re-Hi, On Wed, May 1, 2013 at 7:19 PM, Armin Rigo wrote: > Could it be helpful to have a faster version of zip() specialized to > two arguments? It would avoid the loop of length 2 that we do for > each pair of items. Done in ffe6fdf3a875. The zip() function is now apparently more than 4 times faster when called with two smallish lists :-) Thanks cat street for the original report. Your benchmark is more than 2 times faster now (the dict() is still taking the same time). A bient?t, Armin. From micahel at gmail.com Wed May 1 23:00:18 2013 From: micahel at gmail.com (Michael Hudson-Doyle) Date: Thu, 2 May 2013 09:00:18 +1200 Subject: [pypy-dev] hexiom2 benchmark In-Reply-To: References: Message-ID: On 1 May 2013 21:48, Maciej Fijalkowski wrote: > On Wed, May 1, 2013 at 11:16 AM, Armin Rigo wrote: > > Hi Michael, > > > > On Wed, May 1, 2013 at 3:54 AM, Michael Hudson-Doyle > wrote: > >> File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line > 170, in > >> TScore > >> return (avg(sample1) - avg(sample2)) / math.sqrt(error * 2) > >> ZeroDivisionError: float division by zero > > > > The code could be more robust indeed :-( It's because hexiom2 runs > > only once in --fast mode, so it ends up with error == exactly 0.0. > > Fixed in f7abffc04667. > > > > > > A bient?t, > > > > Armin. 
> > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > Btw, this is a benchmark run (without --fast) summary: > > http://paste.pound-python.org/show/32751/ > Thanks for that. > It seems CPython on ARM is kinda bad (despite the fact that our > assembler is bad too) > Yeah. Maybe that's something we (Linaro) can do something about... Cheers, mwh -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Thu May 2 09:59:06 2013 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 02 May 2013 09:59:06 +0200 Subject: [pypy-dev] Pypy is slower than Python In-Reply-To: References: Message-ID: <51821CCA.7080108@gmail.com> On 05/01/2013 07:22 PM, Alex Gaynor wrote: > Yes, we have a specialized map for 2 arguments, a specialized zip makes sense. > (Or figuring out how to specialize that loop for N-arguments where N is > ~smallish so the inner loop is unrolled at app level, that's harder, but > probably worthwhile n the long run). In general, it'd be very useful to have a way to say the equivalent of @unroll_safe at applevel, although then it could be used very badly if you don't know exactly what you are doing. I think that cfbolz once started a branch to give hints from applevel, but then he never finished. Is that correct? ciao, Anto From micahel at gmail.com Thu May 2 09:59:50 2013 From: micahel at gmail.com (Michael Hudson-Doyle) Date: Thu, 2 May 2013 19:59:50 +1200 Subject: [pypy-dev] hexiom2 benchmark In-Reply-To: References: Message-ID: [apologies for the off-list mail] Thanks. For another kind of robustness, is it really necessary to throw away all the benchmark results when one (translate) fails? My run without --fast just fell over after well over a day of running :( Cheers, mwh On 1 May 2013 21:16, Armin Rigo wrote: > Hi Michael, > > On Wed, May 1, 2013 at 3:54 AM, Michael Hudson-Doyle > wrote: > > File "/home/mwhudson/src/benchmarks/unladen_swallow/perf.py", line > 170, in > > TScore > > return (avg(sample1) - avg(sample2)) / math.sqrt(error * 2) > > ZeroDivisionError: float division by zero > > The code could be more robust indeed :-( It's because hexiom2 runs > only once in --fast mode, so it ends up with error == exactly 0.0. > Fixed in f7abffc04667. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hakan at debian.org Thu May 2 11:52:10 2013 From: hakan at debian.org (Hakan Ardo) Date: Thu, 2 May 2013 11:52:10 +0200 Subject: [pypy-dev] Second compilation stage Message-ID: Hi, there have been a bit of talk on improving loops with control flow, i.e. when there is no single dominant path through the loop. To have that discussion here let me make the following proposal (based on a proposal from Armin): How about aiming for something that starts out with running the current JIT with the unrolling disabled. That will produce a graph of traces which we could partition into one subgraph per loop. When we believe all guards are either traced or never failing, we use such subgraphs as preambles. That is, we copy the full graph to producing a peeled loop graph and optimized it. We focus the optimization on removing the remaining guards as they (hopefully in most cases) are loop invariant. Then we have a guard free graph of traces that we could pass to gcc/llvm and the resulting machine code are attached after the preamble. 
If it turns out to be necessary, we could later add support for handing guards to better support cases where we are unable optimize them out. -- H?kan Ard? From arigo at tunes.org Thu May 2 15:28:18 2013 From: arigo at tunes.org (Armin Rigo) Date: Thu, 2 May 2013 15:28:18 +0200 Subject: [pypy-dev] Second compilation stage In-Reply-To: References: Message-ID: Hi Hakan, On Thu, May 2, 2013 at 11:52 AM, Hakan Ardo wrote: > there have been a bit of talk on improving loops with control flow, > i.e. when there is no single dominant path through the loop. To have > that discussion here let me make the following proposal (based on a > proposal from Armin): I proposed nothing more or less than sending our loops to gcc/llvm. You're suggesting some refactorings of the way loops are produced, which may make sense or not, but is more than I ever claim to have proposed :-) > If it turns out to be necessary, we could later add support for > handing guards to better support cases where we are unable optimize > them out. One quick word, I doubt very strongly that we can hope to remove *all* guards. Starting from that assumption looks wrong to me. In my opinion we'd be left with a large quantity of guards anyway --- mostly guards that never failed so far and probably never will, but which cannot be proven never to fail. (They would fail in some rare circumstances like suddenly starting pdb, or some integer computation overflowing; or, for most of them, you'd need some impossibly clever global "prover" to check e.g. that this list here cannot be empty or can only contain objects of this precise type.) A bient?t, Armin. From hakan at debian.org Thu May 2 21:29:37 2013 From: hakan at debian.org (Hakan Ardo) Date: Thu, 2 May 2013 21:29:37 +0200 Subject: [pypy-dev] Second compilation stage In-Reply-To: References: Message-ID: On Thu, May 2, 2013 at 3:28 PM, Armin Rigo wrote: > Hi Hakan, > > On Thu, May 2, 2013 at 11:52 AM, Hakan Ardo wrote: >> there have been a bit of talk on improving loops with control flow, >> i.e. when there is no single dominant path through the loop. To have >> that discussion here let me make the following proposal (based on a >> proposal from Armin): > > I proposed nothing more or less than sending our loops to gcc/llvm. > You're suggesting some refactorings of the way loops are produced, > which may make sense or not, but is more than I ever claim to have > proposed :-) Right :) Sorry for being unclear.. > >> If it turns out to be necessary, we could later add support for >> handing guards to better support cases where we are unable optimize >> them out. > > One quick word, I doubt very strongly that we can hope to remove *all* > guards. Starting from that assumption looks wrong to me. In my > opinion we'd be left with a large quantity of guards anyway --- mostly > guards that never failed so far and probably never will, but which > cannot be proven never to fail. (They would fail in some rare > circumstances like suddenly starting pdb, or some integer computation > overflowing; or, for most of them, you'd need some impossibly clever > global "prover" to check e.g. that this list here cannot be empty or > can only contain objects of this precise type.) Your right of course. I pulled some statistics out of our benchmarks by counting the number of ops, guards and not yet traced guards for the three sections defined by the labels in our loops. That would be some setup section, the preamble and the peeled loop. 
This should give an indication of how good our current unrolling is at getting the never failing guards out of the loop, and it's not very good at it: http://32c60f1d49dbd76e.paste.se/ So we need to support guards from the start. -- Håkan Ardö From haoyi.sg at gmail.com Sun May 5 00:32:11 2013 From: haoyi.sg at gmail.com (haoyi.sg at gmail.com) Date: Sat, 4 May 2013 22:32:11 +0000 Subject: [pypy-dev] Newbie question: using PyPy to compile the source of a single function? Message-ID: <51858c70.8387e00a.7961.141e@mx.google.com> I'm looking for some way of programmatically using PyPy to compile a snippet of python source code (probably a function def) into an optimized binary, which I can call to pass data back and forth. The end goal is to have something like this @PyPy def expensive_function(arg): ... expensive computation ... return result using macros (https://github.com/lihaoyi/macropy) to perform this conversion at import time. I have no idea if this is possible or not; could anyone here give me any pointers or advice how to do this/why it is impossible? Thanks! -Haoyi -------------- next part -------------- An HTML attachment was scrubbed... URL: From phyo.arkarlwin at gmail.com Sun May 5 00:52:18 2013 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 5 May 2013 05:22:18 +0630 Subject: [pypy-dev] Newbie question: using PyPy to compile the source of a single function? In-Reply-To: <51858c70.8387e00a.7961.141e@mx.google.com> References: <51858c70.8387e00a.7961.141e@mx.google.com> Message-ID: Pypy is JIT. if you want to do such thing you better look for Nuitka and Cython. On Sun, May 5, 2013 at 5:02 AM, wrote: > I'm looking for some way of programmatically using PyPy to compile a > snippet of python source code (probably a function def) into an optimized > binary, which I can call to pass data back and forth. The end goal is to > have something like this > > @PyPy > def expensive_function(arg): > ... expensive computation ... > return result > > using macros (https://github.com/lihaoyi/macropy) to perform this > conversion at import time. > > I have no idea if this is possible or not; could anyone here give me any > pointers or advice how to do this/why it is impossible? > > Thanks! > -Haoyi > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haoyi.sg at gmail.com Sun May 5 01:41:33 2013 From: haoyi.sg at gmail.com (Haoyi Li) Date: Sat, 4 May 2013 16:41:33 -0700 Subject: [pypy-dev] Newbie question: using PyPy to compile the source of a single function? Message-ID: <2186514503173654912@unknownmsgid> Doesn't PyPy have a "compile to binary" option? I thought it did, but I may be mistaken. Sent from my Windows Phone From: Phyo Arkar Sent: 5/4/2013 6:52 PM To: haoyi.sg at gmail.com Cc: pypy-dev at python.org Subject: Re: [pypy-dev] Newbie question: using PyPy to compile the source of a single function? Pypy is JIT. if you want to do such thing you better look for Nuitka and Cython. 
On Sun, May 5, 2013 at 5:02 AM, wrote: > I'm looking for some way of programmatically using PyPy to compile a > snippet of python source code (probably a function def) into an optimized > binary, which I can call to pass data back and forth. The end goal is to > have something like this > > @PyPy > def expensive_function(arg): > ... expensive computation ... > return result > > using macros (https://github.com/lihaoyi/macropy) to perform this > conversion at import time. > > I have no idea if this is possible or not; could anyone here give me any > pointers or advice how to do this/why it is impossible? > > Thanks! > -Haoyi > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev
From fijall at gmail.com Sun May 5 11:12:43 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 5 May 2013 11:12:43 +0200 Subject: [pypy-dev] Newbie question: using PyPy to compile the source In-Reply-To: <2186514503173654912@unknownmsgid> References: <2186514503173654912@unknownmsgid> Message-ID: > Doesn't PyPy have a "compile to binary" option? I thought it did, but I > may be mistaken. It does not. From arigo at tunes.org Sun May 5 11:28:07 2013 From: arigo at tunes.org (Armin Rigo) Date: Sun, 5 May 2013 11:28:07 +0200 Subject: [pypy-dev] Newbie question: using PyPy to compile the source In-Reply-To: References: <2186514503173654912@unknownmsgid> Message-ID: Hi, On Sun, May 5, 2013 at 11:12 AM, Maciej Fijalkowski wrote: >> Doesn't PyPy have a "compile to binary" option? I thought it did, but I >> may be mistaken. > > It does not. It does, in a way, but it's not an option available for the user on a single function at a time. The confusion comes from the RPython translator toolchain. This is how we produce PyPy, by translating into a binary the *whole* complete source code of PyPy (which is written in RPython, not in Python). There is no support to compile a single RPython function at a time --- or rather, there is, for our own testing purposes, but there is no reasonable way to integrate the result with the rest of your running Python code. As others have pointed out, you should not have to worry about it anyway because we have a JIT for full Python code. A bientôt, Armin. From arigo at tunes.org Sun May 5 11:59:44 2013 From: arigo at tunes.org (Armin Rigo) Date: Sun, 5 May 2013 11:59:44 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) Message-ID: Hi all, I'm just wondering again about some "bug" reports that are not bugs, about people misusing "is" to compare two immutable objects. The current situation in PyPy is that "is" works like "==" for ints, longs, floats or complexes. It does not for strs or unicodes or tuples. Now of course someone on python-dev was (indirectly) complaining that you can compare in CPython ``x is ' '``, which works because single-character strings are cached, but not in PyPy. I'm sure someone else has been bitten by writing in CPython ``x is ()``, which is also cached there. (Fwiw I think that there is a design flaw somewhere in Python, to allow "1 is 1" to be executed without any error but also without any well-defined result...) Can we fix it once and for all? It's annoying because of id: if we want ``x is y`` for equal huge strings x and y, but still want ``id(x)==id(y)``, then we have to compute ``id(some_string)`` in a rather slow way, producing a huge number. The same for tuples: if we always want ``(1, 2) is (1, 2)`` then we need to compute ``id(some_tuple)`` recursively, which can also lead to huge numbers. In fact such a definition can explode the memory: ``a = (); for i in range(100): a = (a, a); id(a)`` would likely need a 2**100-digits number. Solution 2 would be to add these hacks specially for cases that CPython caches: I think by now we're only missing empty or single-char strings or unicodes, and empty tuple. Solution 3 would be to drop half of the rule, keeping only ``id(x)==id(y) => x is y``. This would be the easiest, as we could remove the complicated computations already done for longs or floats or complexes. We'd clearly document it as a difference from CPython. The question is what kind of code might break if we drop the case ``x is y => id(x)==id(y)``. A bientôt, Armin. 
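A compact illustration of the difference being discussed -- the CPython results rely purely on implementation details such as the single-character string cache, the PyPy results follow from the behaviour described above, and none of it is promised by the language:

    x = 'hello.world'
    print x[5] is '.'     # True on CPython (single-char strings are cached),
                          # False on PyPy at the time of this thread
    print () is ()        # True on CPython (the empty tuple is cached)

    # PyPy instead makes "is" behave like "==" for ints, longs, floats
    # and complexes:
    a = 1000
    b = 1000
    print (a + 1) is (b + 1)   # False on CPython, True on PyPy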
From steve at pearwood.info Sun May 5 13:20:25 2013 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 05 May 2013 21:20:25 +1000 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: <51864079.50100@pearwood.info> On 05/05/13 19:59, Armin Rigo wrote: > Hi all, > > I'm just wondering again about some "bug" reports that are not bugs, > about people misusing "is" to compare two immutable objects. The > current situation in PyPy is that "is" works like "==" for ints, > longs, floats or complexes. It does not for strs or unicodes or > tuples. I don't understand why immutability comes into this. The `is` operator is supposed to test whether the two operands are the same object, nothing more, nothing less. Immutable, or mutable, it makes no difference. Now, it may be that *some* immutable objects may (implicitly, or explicitly) promise that you will never have two objects with the same value. For example, float might cache every object created, so that once you have created a float 23.45910234718, it will *always* be reused whenever a float with that value is needed. That would be allowed. But if float does not cache the value, and so you have two different float objects, with different IDs, then it is absolutely wrong for PyPy to treat `is` as == instead of testing object identity. Have I misunderstood what you are saying? -- Steven From amauryfa at gmail.com Sun May 5 13:38:11 2013 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sun, 5 May 2013 13:38:11 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: Hi, 2013/5/5 Armin Rigo > Hi all, > > I'm just wondering again about some "bug" reports that are not bugs, > about people misusing "is" to compare two immutable objects. The > current situation in PyPy is that "is" works like "==" for ints, > longs, floats or complexes. It does not for strs or unicodes or > tuples. Now of course someone on python-dev was (indirectly) > complaining that you can compare in CPython ``x is ' '``, which works > because single-character strings are cached, but not in PyPy. I'm > sure someone else has been bitten by writing in CPython ``x is ()``, > which is also cached there. > Strings are not always cached; with CPython2.7: >>> x = u'?'.encode('ascii', 'ignore') >>> x == '', x is '' (True, False) > (Fwiw I think that there is a design flaw somewhere in Python, to > allow "1 is 1" to be executed without any error but also without any > well-defined result...) > > Can we fix it once and for all? It's annoying because of id: if we > want ``x is y`` for equal huge strings x and y, but still want > ``id(x)==id(y)``, then we have to compute ``id(some_string)`` in a > rather slow way, producing a huge number. The same for tuples: if we > always want ``(1, 2) is (1, 2)`` then we need to compute > ``id(some_tuple)`` recursively, which can also lead to huge numbers. > In fact such a definition can explode the memory: ``a = (); for i in > range(100): a = (a, a); id(a)`` would likely need a 2**100-digits > number. > > Solution 2 would be to add these hacks specially for cases that > CPython caches: I think by now we're only missing empty or single-char > strings or unicodes, and empty tuple. > > Solution 3 would be to drop half of the rule, keeping only > ``id(x)==id(y) => x is y``. This would be the easiest, as we could > remove the complicated computations already done for longs or floats > or complexes. We'd clearly document it as a difference from CPython. 
> The question is what kind of code might break if we drop the case ``x > is y => id(x)==id(y)``. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sun May 5 19:35:50 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 5 May 2013 19:35:50 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: <51864079.50100@pearwood.info> References: <51864079.50100@pearwood.info> Message-ID: On Sun, May 5, 2013 at 1:20 PM, Steven D'Aprano wrote: > On 05/05/13 19:59, Armin Rigo wrote: >> >> Hi all, >> >> I'm just wondering again about some "bug" reports that are not bugs, >> about people misusing "is" to compare two immutable objects. The >> current situation in PyPy is that "is" works like "==" for ints, >> longs, floats or complexes. It does not for strs or unicodes or >> tuples. > > > I don't understand why immutability comes into this. The `is` operator is > supposed to test whether the two operands are the same object, nothing more, > nothing less. Immutable, or mutable, it makes no difference. > > Now, it may be that *some* immutable objects may (implicitly, or explicitly) > promise that you will never have two objects with the same value. For > example, float might cache every object created, so that once you have > created a float 23.45910234718, it will *always* be reused whenever a float > with that value is needed. That would be allowed. > > But if float does not cache the value, and so you have two different float > objects, with different IDs, then it is absolutely wrong for PyPy to treat > `is` as == instead of testing object identity. > > Have I misunderstood what you are saying? Immutability is important because you can't cache immutable objects. It's true what you're saying, but we consistently see bug reports about people comparing ints or strings with is and complaining that they work fine on cpython, but not on pypy. Also, you expect to have the same identity if you store stuff in the list and then read out of it - which is impossible if you don't actually have any objects in the list, just store unwrapped ones. Cheers, fijal From steve at pearwood.info Sun May 5 21:16:24 2013 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 06 May 2013 05:16:24 +1000 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: <51864079.50100@pearwood.info> Message-ID: <5186B008.9080400@pearwood.info> On 06/05/13 03:35, Maciej Fijalkowski wrote: > On Sun, May 5, 2013 at 1:20 PM, Steven D'Aprano wrote: >> On 05/05/13 19:59, Armin Rigo wrote: >>> >>> Hi all, >>> >>> I'm just wondering again about some "bug" reports that are not bugs, >>> about people misusing "is" to compare two immutable objects. The >>> current situation in PyPy is that "is" works like "==" for ints, >>> longs, floats or complexes. It does not for strs or unicodes or >>> tuples. >> >> >> I don't understand why immutability comes into this. The `is` operator is >> supposed to test whether the two operands are the same object, nothing more, >> nothing less. Immutable, or mutable, it makes no difference. >> >> Now, it may be that *some* immutable objects may (implicitly, or explicitly) >> promise that you will never have two objects with the same value. 
For >> example, float might cache every object created, so that once you have >> created a float 23.45910234718, it will *always* be reused whenever a float >> with that value is needed. That would be allowed. >> >> But if float does not cache the value, and so you have two different float >> objects, with different IDs, then it is absolutely wrong for PyPy to treat >> `is` as == instead of testing object identity. >> >> Have I misunderstood what you are saying? > > Immutability is important because you can't cache immutable objects. Yes, I know that :-) but that has nothing to do with the behaviour of `is`. > It's true what you're saying, but we consistently see bug reports > about people comparing ints or strings with is and complaining that > they work fine on cpython, but not on pypy. Then their code is buggy, not PyPy. But you know that :-) I don't believe that PyPy should take extraordinary effort to protect people from the consequences of writing buggy code. But putting that aside, I would expect that: x is y <=> id(x) == id(y) The docs say: "The operators is and is not test for object identity: x is y is true if and only if x and y are the same object. x is not y yields the inverse truth value." http://docs.python.org/2/reference/expressions.html#index-68 and "id(object) Return the ?identity? of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value." http://docs.python.org/2/library/functions.html#id So each object has a single, unique, constant ID during its lifetime. So if id(x) == id(y) and x and y overlap in their lifetime, that implies that x and y are the same object. Likewise, if x and y are the same object, that implies that they have the same ID. > Also, you expect to have > the same identity if you store stuff in the list and then read out of > it - which is impossible if you don't actually have any objects in the > list, just store unwrapped ones. Ah, now that is an interesting question! My lack of experience with PyPy is going to show now. I take it that PyPy might optimize away the objects inside a list, storing only unboxed values? This is a really hard question. If I do this: a = b = X # regardless of what X is mylist = [a, None] assert mylist[0] is a assert mylist[0] is b both assertions must pass, no matter what X is, whether mutable or immutable. But if the values in mylist get unwrapped, then you would have to reconstruct the object identities, and I imagine that this would be painful. But it would be a shame to give up the opportunity for optimizations that unboxing could give. Have I understood the nature of your problem correctly? -- Steven From arigo at tunes.org Sun May 5 21:18:42 2013 From: arigo at tunes.org (Armin Rigo) Date: Sun, 5 May 2013 21:18:42 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: Hi Amaury, On Sun, May 5, 2013 at 1:38 PM, Amaury Forgeot d'Arc wrote: > Strings are not always cached; with CPython2.7: >>>> x = u'?'.encode('ascii', 'ignore') >>>> x == '', x is '' > (True, False) That's true, there are such cases, but that's partially irrelevant for this issue: strings that *sometimes,* or *often,* end up with the same id() in CPython. Should they also end up with the same id() in PyPy? A bient?t, Armin. 
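The "sometimes" is the crux: on CPython whether two equal strings share an id() depends entirely on how they were built. Typical CPython 2.7 results (all implementation accidents, none of them guaranteed):

    print 'ab' is 'ab'                 # True: equal literals in one code object are merged
    print ''.join(['a', 'b']) is 'ab'  # False: built at run time, not interned
    print 'ab'[0] is 'a'               # True: length-1 results come from the character cache

Amaury's encode() example above is the same effect for the empty string; the question Armin raises is whether PyPy needs to reproduce these accidents at all.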
From arigo at tunes.org Sun May 5 21:41:54 2013 From: arigo at tunes.org (Armin Rigo) Date: Sun, 5 May 2013 21:41:54 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: <5186B008.9080400@pearwood.info> References: <51864079.50100@pearwood.info> <5186B008.9080400@pearwood.info> Message-ID: Hi all, On Sun, May 5, 2013 at 9:16 PM, Steven D'Aprano wrote: >> It's true what you're saying, but we consistently see bug reports >> about people comparing ints or strings with is and complaining that >> they work fine on cpython, but not on pypy. > > Then their code is buggy, not PyPy. But you know that :-) This is precisely what this thread is about: such "buggy" code that uses "is" to compare two immutable objects. At this point, the question is not "would it cause any trouble in existing programs to say that "x is not y" when CPython in the same program says that "x is y", because we know that the answer to that is "yes". We already found out a perfectly reasonable fix for "small" objects: two equal ints are always "is"-identical and have the same id() in PyPy. This is a nice way to solve the above problem. If anything it creates the opposite problem: some code that works on PyPy might not work on CPython. If PyPy becomes used enough, CPython will then have to care about that too, and we'll end up with a well-defined definition of "is" on immutable objects :-) But we're not (yet) using the same idea on *all* types of immutable objects. So what we're concerned about now is whether it could be implemented efficiently: the answer could be "yes if we forget about strictly enforcing "x is y <=> id(x) == id(y)". So, the question: although it's documented to be wrong, would it actually cause any trouble to relax this requirement? > a = b = X # regardless of what X is > mylist = [a, None] > assert mylist[0] is a > assert mylist[0] is b > > both assertions must pass, no matter what X is, whether mutable or > immutable. I *think* that in this case the assertions cannot fail in PyPy either. If X is a string, then we get as "mylist[0]" an object that is a different W_StringObject but containing internally the same RPython-level string, and as such (because we tweaked "is") they compare "is"-identical. But that seems like a problem waiting to happen: if in the future we're using a list strategy for a list of single characters, then W_StringObjects containing single characters will be rebuilt out of an RPython list of characters, and not be "is"-identical under our current definition. In addition, the problem right now is about code like ``if x[5] is '.': ...`` which happens to work as expected on CPython, but not on PyPy. In PyPy's case the two strings x[5] and '.' are using different RPython-level strings. A bient?t, Armin. From osadchiy.ilya at gmail.com Sun May 5 22:11:45 2013 From: osadchiy.ilya at gmail.com (Ilya Osadchiy) Date: Sun, 5 May 2013 23:11:45 +0300 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: On Sun, May 5, 2013 at 12:59 PM, Armin Rigo wrote: > > Can we fix it once and for all? It's annoying because of id: if we > want ``x is y`` for equal huge strings x and y, but still want > ``id(x)==id(y)``, then we have to compute ``id(some_string)`` in a > rather slow way, producing a huge number. The same for tuples: if we > always want ``(1, 2) is (1, 2)`` then we need to compute > ``id(some_tuple)`` recursively, which can also lead to huge numbers. 
> In fact such a definition can explode the memory: ``a = (); for i in > range(100): a = (a, a); id(a)`` would likely need a 2**100-digits > number. If the "id(x)==id(y)" requirement is removed, does it mean that "x is y" for immutable types is simply "x==y"? So if we have ``a = (); for i in range(100): a = (a, a); b = (a, a)`` then "a is b" will be computationally expensive? -------------- next part -------------- An HTML attachment was scrubbed... URL: From micahel at gmail.com Sun May 5 22:40:16 2013 From: micahel at gmail.com (Michael Hudson-Doyle) Date: Mon, 6 May 2013 08:40:16 +1200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: On 5 May 2013 21:59, Armin Rigo wrote: > Hi all, > > I'm just wondering again about some "bug" reports that are not bugs, > about people misusing "is" to compare two immutable objects. The > current situation in PyPy is that "is" works like "==" for ints, > longs, floats or complexes. > I want to say something about negative zeroes here.... Cheers, mwh -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacob at openend.se Mon May 6 00:10:18 2013 From: jacob at openend.se (Jacob =?iso-8859-1?q?Hall=E9n?=) Date: Mon, 6 May 2013 00:10:18 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: <5186B008.9080400@pearwood.info> Message-ID: <201305060010.18191.jacob@openend.se> Personally, I think that being implementation detail compatible with CPython is the way tio go if we want to achieve maximum popularity in the short run. Making a sane implementation (Armins third option) is the one that I think will serve the Python community in the best way in the long run. Using "is" as a comparison when you mean "==" is a bad meme that has been very hard to get rid of. The person who's opinion on this matter that I would value the most is Guido's. I suggest asking him. Jacob Sunday 05 May 2013 you wrote: > Hi all, > > On Sun, May 5, 2013 at 9:16 PM, Steven D'Aprano wrote: > >> It's true what you're saying, but we consistently see bug reports > >> about people comparing ints or strings with is and complaining that > >> they work fine on cpython, but not on pypy. > > > > Then their code is buggy, not PyPy. But you know that :-) > > This is precisely what this thread is about: such "buggy" code that > uses "is" to compare two immutable objects. At this point, the > question is not "would it cause any trouble in existing programs to > say that "x is not y" when CPython in the same program says that "x is > y", because we know that the answer to that is "yes". > > We already found out a perfectly reasonable fix for "small" objects: two > equal ints are always "is"-identical and have the same id() in PyPy. > This is a nice way to solve the above problem. If anything it creates > the opposite problem: some code that works on PyPy might not work on > CPython. If PyPy becomes used enough, CPython will then have to care > about that too, and we'll end up with a well-defined definition of > "is" on immutable objects :-) > > But we're not (yet) using the same idea on *all* types of immutable > objects. So what we're concerned about now is whether it could be > implemented efficiently: the answer could be "yes if we forget about > strictly enforcing "x is y <=> id(x) == id(y)". So, the question: > although it's documented to be wrong, would it actually cause any > trouble to relax this requirement? 
> > > a = b = X # regardless of what X is > > mylist = [a, None] > > assert mylist[0] is a > > assert mylist[0] is b > > > > both assertions must pass, no matter what X is, whether mutable or > > immutable. > > I *think* that in this case the assertions cannot fail in PyPy either. > If X is a string, then we get as "mylist[0]" an object that is a > different W_StringObject but containing internally the same > RPython-level string, and as such (because we tweaked "is") they > compare "is"-identical. But that seems like a problem waiting to > happen: if in the future we're using a list strategy for a list > of single characters, then W_StringObjects containing single > characters will be rebuilt out of an RPython list of characters, and > not be "is"-identical under our current definition. > > In addition, the problem right now is about code like ``if x[5] is > '.': ...`` which happens to work as expected on CPython, but not on > PyPy. In PyPy's case the two strings x[5] and '.' are using different > RPython-level strings. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From hodgestar at gmail.com Mon May 6 00:43:42 2013 From: hodgestar at gmail.com (Simon Cross) Date: Mon, 6 May 2013 00:43:42 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: Solution 3 sounds bad since it breaks things in PyPy for people who were using "is" more correctly in CPython. From alex.gaynor at gmail.com Mon May 6 00:45:48 2013 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 5 May 2013 15:45:48 -0700 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: I wonder if maybe we can't have some sort of flag to add extra compatibility warnings, and then have a warning when `is` is used ints, strings, etc? Alex On Sun, May 5, 2013 at 3:43 PM, Simon Cross wrote: > Solution 3 sounds bad since it breaks things in PyPy for people who > were using "is" more correctly in CPython. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hodgestar at gmail.com Mon May 6 00:48:50 2013 From: hodgestar at gmail.com (Simon Cross) Date: Mon, 6 May 2013 00:48:50 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: I was thinking along similar signs -- we could ask for things like "x is ''" or "x is 3" to be added to PEP8 (I think any use of "is" with a constant on one or more sides is likely suspect). From arigo at tunes.org Mon May 6 08:52:11 2013 From: arigo at tunes.org (Armin Rigo) Date: Mon, 6 May 2013 08:52:11 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: Hi Michael, On Sun, May 5, 2013 at 10:40 PM, Michael Hudson-Doyle wrote: > I want to say something about negative zeroes here.... Right: on floats it's not actually the usual equality, but equality of the bit pattern (using float2longlong). A bient?t, Armin. 
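A check like the one Simon describes does not need compiler support to prototype; here is a rough linter-style sketch that flags "is"/"is not" comparisons with a literal operand, using the stdlib ast module (illustrative only -- not something PyPy or PEP 8 ships):

    import ast

    # literal operands that make an identity test suspect, e.g. '', 3, ()
    SUSPECT = (ast.Str, ast.Num, ast.Tuple)

    def find_suspect_is(source, filename='<string>'):
        tree = ast.parse(source, filename)
        for node in ast.walk(tree):
            if not isinstance(node, ast.Compare):
                continue
            if not any(isinstance(op, (ast.Is, ast.IsNot)) for op in node.ops):
                continue
            operands = [node.left] + node.comparators
            if any(isinstance(o, SUSPECT) for o in operands):
                yield node.lineno

    for lineno in find_suspect_is("if x is '': pass\nif y is 3: pass\n"):
        print 'suspicious "is" comparison on line', lineno

Armin's suggestion further down, of having the compiler emit SyntaxWarnings, would catch the same patterns at compile time.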
From arigo at tunes.org Mon May 6 08:54:49 2013 From: arigo at tunes.org (Armin Rigo) Date: Mon, 6 May 2013 08:54:49 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: Hi Simon, On Mon, May 6, 2013 at 12:48 AM, Simon Cross wrote: > I was thinking along similar signs -- we could ask for things like "x > is ''" or "x is 3" to be added to PEP8 (I think any use of "is" with a > constant on one or more sides is likely suspect). That may be a good idea. If the compiler emits SyntaxWarnings for these cases, then maybe it's all we need to cover most of the bad usages. A bient?t, Armin. From amauryfa at gmail.com Mon May 6 09:03:31 2013 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 6 May 2013 09:03:31 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: 2013/5/6 Armin Rigo > On Sun, May 5, 2013 at 10:40 PM, Michael Hudson-Doyle > wrote: > > I want to say something about negative zeroes here.... > > Right: on floats it's not actually the usual equality, but equality of > the bit pattern (using float2longlong). Except for NaN... -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Mon May 6 09:25:24 2013 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Mon, 6 May 2013 17:25:24 +1000 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: On 6 May 2013 17:03, Amaury Forgeot d'Arc wrote: > 2013/5/6 Armin Rigo >> >> On Sun, May 5, 2013 at 10:40 PM, Michael Hudson-Doyle >> wrote: >> > I want to say something about negative zeroes here.... >> >> Right: on floats it's not actually the usual equality, but equality of >> the bit pattern (using float2longlong). > > > Except for NaN... It's perfectly acceptable for NaN to `is` on their bit pattern. -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely may reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to deny you those rights would be illegal without prior contractual agreement. From arigo at tunes.org Mon May 6 09:38:24 2013 From: arigo at tunes.org (Armin Rigo) Date: Mon, 6 May 2013 09:38:24 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: Hi Ilya, On Sun, May 5, 2013 at 10:11 PM, Ilya Osadchiy wrote: > If the "id(x)==id(y)" requirement is removed, does it mean that "x is y" for > immutable types is simply "x==y"? > So if we have ``a = (); for i in range(100): a = (a, a); b = (a, a)`` then > "a is b" will be computationally expensive? It's not exactly ``x==y``: for tuples it means recursively checking that items are ``is``-identical. It's possible to avoid the computational explosion, like CPython did for equality a long time ago (up to maybe 2.3?) before it was removed. You basically want to check if the a and b objects are "in a bisimulation" or not, which can be done without visiting the same object more than once, for any connexion graph. The reason I think it's a good idea (or at least not a bad idea) to reintroduce the complexity of bisimulation where CPython removed it, is that the purpose is slightly different and not visible to the user at all. 
If I remember correctly it was removed because it had hard-to-explain effects on when and how many times the user's ``__eq__()`` methods were called; but there is no user-overridable code involved here, merely an "implementation detail". It could equivalently be solved by aggressively caching all tuple creation. A bient?t, Armin. From steve at pearwood.info Mon May 6 15:12:24 2013 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 06 May 2013 23:12:24 +1000 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: <5187AC38.7030701@pearwood.info> On Mon, May 06, 2013 at 05:25:24PM +1000, William ML Leslie wrote: > On 6 May 2013 17:03, Amaury Forgeot d'Arc wrote: > > 2013/5/6 Armin Rigo > >> > >> On Sun, May 5, 2013 at 10:40 PM, Michael Hudson-Doyle > >> wrote: > >> > I want to say something about negative zeroes here.... > >> > >> Right: on floats it's not actually the usual equality, but equality of > >> the bit pattern (using float2longlong). > > > > > > Except for NaN... > > It's perfectly acceptable for NaN to `is` on their bit pattern. Not unless the implementation caches floats. Otherwise you could have two distinct instances with the same bit pattern. NANs are no different from other floats in that the language doesn't guarantee that there is only one of them. Unless an implementation ensures that there is *exactly one* float object with a given bit pattern, then you can have multiple instances of a specific NAN, and two NANs with the same bit pattern may be distinct objects. Although... a thought comes to mind. Since floats are immutable, you could add an abstraction between the actual objects in memory as seen by the low-level implementation, and what are seen as distinct objects by high-level Python code. So two floats with the same bit-pattern in two different memory locations could nevertheless be seen by Python as one instance. I have no idea whether this is plausible, or if PyPy already does this, or whether I'm talking sheer nonsense. Of course the IDs would have to be the same, and that's tricky, but I guess that's what this thread is about. -- Steven From arigo at tunes.org Mon May 6 16:03:53 2013 From: arigo at tunes.org (Armin Rigo) Date: Mon, 6 May 2013 16:03:53 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: <5187AC38.7030701@pearwood.info> References: <5187AC38.7030701@pearwood.info> Message-ID: Hi Steven, On Mon, May 6, 2013 at 3:12 PM, Steven D'Aprano wrote: > I have no idea whether this is plausible, or if PyPy already does this, or > whether I'm talking sheer nonsense. PyPy already does this. > Of course the IDs would have to be the same, and that's tricky, but I guess > that's what this thread is about. This is not tricky: the id is the bitpattern seen as an 8-bytes integer. This thread is about the harder cases of strings and tuples. A bient?t, Armin. From james.d.masters at gmail.com Mon May 6 20:28:43 2013 From: james.d.masters at gmail.com (James Masters) Date: Mon, 6 May 2013 11:28:43 -0700 Subject: [pypy-dev] Leveraging PyPy to translate/compile into other scripting languages? Message-ID: Hi, I have a need to translate Python code into source code in other scripting languages - some open and others proprietary. The scope of the problem space is very narrow... essentially the need is to define/lookup variables, perform arithmetic operations, define/call functions, etc. but nothing complex like regular expression handling or even opening a file. 
Some of the target languages are dynamically typed and others are statically typed, so some level of type awareness is needed (preferably through inference). I currently have a partial implementation which uses the Python AST module and some string templating to express the AST nodes in the other languages. I am about to move on to building a symbol table capability and tackle type inference. Before I head too far down this road, I keep thinking back to PyPy and RPython and wondering if I could use portions of this work to help me accomplish what I'm trying to do. Any comments? Thanks, James -------------- next part -------------- An HTML attachment was scrubbed... URL: From santagada at gmail.com Mon May 6 20:58:28 2013 From: santagada at gmail.com (Leonardo Santagada) Date: Mon, 6 May 2013 15:58:28 -0300 Subject: [pypy-dev] PyPy to asm.js Message-ID: asm.js[1] seems a somewhat reasonable target for pypy, maybe even the jit could be made to work with it (with an external javascript lib that calls eval for compilation of the asm.js traces). Is anyone looking it it? They are planning to add support for garbage collector in asm.js and also support for more interesting interaction between the asm.js binary and the javascript ecosystem. [1] http://asmjs.org/faq.html -- Leonardo Santagada -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon May 6 21:27:36 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 6 May 2013 21:27:36 +0200 Subject: [pypy-dev] PyPy to asm.js In-Reply-To: References: Message-ID: I don't think there is anyone looking into that right now. On Mon, May 6, 2013 at 8:58 PM, Leonardo Santagada wrote: > asm.js[1] seems a somewhat reasonable target for pypy, maybe even the jit > could be made to work with it (with an external javascript lib that calls > eval for compilation of the asm.js traces). Is anyone looking it it? > > They are planning to add support for garbage collector in asm.js and also > support for more interesting interaction between the asm.js binary and the > javascript ecosystem. > > > [1] http://asmjs.org/faq.html > > -- > > Leonardo Santagada > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From fijall at gmail.com Tue May 7 14:40:54 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 7 May 2013 14:40:54 +0200 Subject: [pypy-dev] PyPy 2.0 alpha for ARM Message-ID: Hello. We're pleased to announce an alpha release of PyPy 2.0 for ARM. This is mostly a technology preview, as we know the JIT is not yet stable enough for the full release. However please try your stuff on ARM and report back. This is the first release that supports a range of ARM devices - anything with ARMv6 (like the Raspberry Pi) or ARMv7 (like Beagleboard, Chromebook, Cubieboard, etc.) that supports VFPv3 should work. We provide builds with support for both ARM EABI variants: hard-float and some older operating systems soft-float. This release comes with a list of limitations, consider it alpha quality, not suitable for production: * stackless support is missing. * assembler produced is not always correct, but we successfully managed to run large parts of our extensive benchmark suite, so most stuff should work. You can download the PyPy 2.0 alpha ARM release here: http://pypy.org/download.html Part of the work was sponsored by the `Raspberry Pi foundation`_. .. 
_`Raspberry Pi foundation`: http://www.raspberrypi.org/

What is PyPy?
=============

PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7.3. It's fast due to its integrated tracing JIT compiler.

This release supports ARM machines running Linux 32bit. Both hard-float ``armhf`` and soft-float ``armel`` builds are provided. ``armhf`` builds are created using the Raspberry Pi custom `cross-compilation toolchain`_ based on gcc-arm-linux-gnueabihf and should work on ARMv6 and ARMv7 devices running at least debian or ubuntu. ``armel`` builds are built using the gcc-arm-linux-gnueabi toolchain provided by ubuntu and currently target ARMv7. If there is interest in other builds, such as gnueabi for ARMv6 or without requiring a VFP, let us know in the comments or in IRC.

.. _`cross-compilation toolchain`: https://github.com/raspberrypi

Benchmarks
==========

Everybody loves benchmarks. Here is a table of our benchmark suite (for ARM we don't provide it yet on http://speed.pypy.org, unfortunately). This is a comparison of a Cortex A9 processor with 4M cache and a Xeon W3580 with 8M of L3 cache. The set of benchmarks is a subset of what we run for http://speed.pypy.org that finishes in reasonable time. The ARM machine was provided by Calxeda. Columns are respectively:

* benchmark name
* PyPy speedup over CPython on ARM (Cortex A9)
* PyPy speedup over CPython on x86 (Xeon)
* speedup on Xeon vs Cortex A9, as measured on PyPy
* speedup on Xeon vs Cortex A9, as measured on CPython
* relative speedup (how much bigger the x86 speedup is over ARM speedup)

(in case this table is not readable, please visit http://morepypy.blogspot.com/2013/05/pypy-20-alpha-for-arm.html)

| Benchmark | PyPy vs CPython (arm) | PyPy vs CPython (x86) | x86 vs arm (pypy) | x86 vs arm (cpython) | relative speedup |
| ai | 3.61 | 3.16 | 7.70 | 8.82 | 0.87 |
| bm_mako | 3.41 | 2.11 | 8.56 | 13.82 | 0.62 |
| chaos | 21.82 | 17.80 | 6.93 | 8.50 | 0.82 |
| crypto_pyaes | 22.53 | 19.48 | 6.53 | 7.56 | 0.86 |
| django | 13.43 | 11.16 | 7.90 | 9.51 | 0.83 |
| eparse | 1.43 | 1.17 | 6.61 | 8.12 | 0.81 |
| fannkuch | 6.22 | 5.36 | 6.18 | 7.16 | 0.86 |
| float | 5.22 | 6.00 | 9.68 | 8.43 | 1.15 |
| go | 4.72 | 3.34 | 5.91 | 8.37 | 0.71 |
| hexiom2 | 8.70 | 7.00 | 7.69 | 9.56 | 0.80 |
| html5lib | 2.35 | 2.13 | 6.59 | 7.26 | 0.91 |
| json_bench | 1.12 | 0.93 | 7.19 | 8.68 | 0.83 |
| meteor-contest | 2.13 | 1.68 | 5.95 | 7.54 | 0.79 |
| nbody_modified | 8.19 | 7.78 | 6.08 | 6.40 | 0.95 |
| pidigits | 1.27 | 0.95 | 14.67 | 19.66 | 0.75 |
| pyflate-fast | 3.30 | 3.57 | 10.64 | 9.84 | 1.08 |
| raytrace-simple | 46.41 | 29.00 | 5.14 | 8.23 | 0.62 |
| richards | 31.48 | 28.51 | 6.95 | 7.68 | 0.91 |
| slowspitfire | 1.28 | 1.14 | 5.91 | 6.61 | 0.89 |
| spambayes | 1.93 | 1.27 | 4.15 | 6.30 | 0.66 |
| sphinx | 1.01 | 1.05 | 7.76 | 7.45 | 1.04 |
| spitfire | 1.55 | 1.58 | 5.62 | 5.49 | 1.02 |
| spitfire_cstringio | 9.61 | 5.74 | 5.43 | 9.09 | 0.60 |
| sympy_expand | 1.42 | 0.97 | 3.86 | 5.66 | 0.68 |
| sympy_integrate | 1.60 | 0.95 | 4.24 | 7.12 | 0.60 |
| sympy_str | 0.72 | 0.48 | 3.68 | 5.56 | 0.66 |
| sympy_sum | 1.99 | 1.19 | 3.83 | 6.38 | 0.60 |
| telco | 14.28 | 9.36 | 3.94 | 6.02 | 0.66 |
| twisted_iteration | 11.60 | 7.33 | 6.04 | 9.55 | 0.63 |
| twisted_names | 3.68 | 2.83 | 5.01 | 6.50 | 0.77 |
| twisted_pb | 4.94 | 3.02 | 5.10 | 8.34 | 0.61 |

It seems that the Cortex A9, while significantly slower than the Xeon, suffers a bigger slowdown with a large interpreter (CPython) than with a JIT compiler (PyPy).
This comes as a surprise to me, especially that our ARM assembler is not nearly as polished as our x86 assembler. As for the causes, various people mentioned branch predictor, but I would not like to speculate without actually knowing. How to use PyPy? ================ We suggest using PyPy from a `virtualenv`_. Once you have a virtualenv installed, you can follow instructions from `pypy documentation`_ on how to proceed. This document also covers other `installation schemes`_. .. _`pypy documentation`: http://doc.pypy.org/en/latest/getting-started.html#installing-using-virtualenv .. _`virtualenv`: http://www.virtualenv.org/en/latest/ .. _`installation schemes`: http://doc.pypy.org/en/latest/getting-started.html#installing-pypy .. _`PyPy and pip`: http://doc.pypy.org/en/latest/getting-started.html#installing-pypy We would not recommend using in production PyPy on ARM just quite yet, however the day of a stable PyPy ARM release is not far off. Cheers, fijal, bivab, arigo and the whole PyPy team From naylor.b.david at gmail.com Wed May 8 22:14:08 2013 From: naylor.b.david at gmail.com (David Naylor) Date: Wed, 08 May 2013 13:14:08 -0700 (PDT) Subject: [pypy-dev] Translating pypy on FreeBSD with CLI backend Message-ID: <1507130.Uoj2OIuUtg@dragon.dg> Hi, I tried to translate pypy-2.0b2 on FreeBSD-9.1/i386 and I get the following error: [Timer] Timings: [Timer] annotate --- 218.3 s [Timer] rtype_ootype --- 137.0 s [Timer] ========================================== [Timer] Total: --- 355.4 s [translation:ERROR] Error: [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/goal/translate.py", line 317, in main [translation:ERROR] drv.proceed(goals) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", line 733, in proceed [translation:ERROR] return self._execute(goals, task_skip = self._maybe_skip()) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/tool/taskengine.py", line 114, in _execute [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", line 284, in _do [translation:ERROR] res = func() [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", line 360, in task_rtype_ootype [translation:ERROR] rtyper.specialize(dont_simplify_again=True) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 207, in specialize [translation:ERROR] self.specialize_more_blocks() [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 250, in specialize_more_blocks [translation:ERROR] self.specialize_block(block) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 404, in specialize_block [translation:ERROR] self.gottypererror(e, block, hop.spaceop, newops) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 402, in specialize_block [translation:ERROR] self.translate_hl_to_ll(hop, varmapping) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 531, in translate_hl_to_ll [translation:ERROR] resultvar = hop.dispatch() [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 759, in dispatch [translation:ERROR] return 
translate_meth(self) [translation:ERROR] File "<17296-codegen /tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py:611>", line 5, in translate_op_is_ [translation:ERROR] return pair(r_arg1, r_arg2).rtype_is_(hop) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rmodel.py", line 293, in rtype_is_ [translation:ERROR] return hop.rtyper.type_system.generic_is(robj1, robj2, hop) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/typesystem.py", line 179, in generic_is [translation:ERROR] roriginal1, roriginal2)) [translation:ERROR] TyperError: is of instances of the non-instances: , [translation:ERROR] .. (pypy.objspace.std.setobject:517)IntegerSetStrategy._difference_update_unwrapped [translation:ERROR] .. block at 3 with 2 exits(v961) [translation:ERROR] .. v964 = is_(v962, v963) Is the CLI supported in general, on FreeBSD or broken? Regards -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: This is a digitally signed message part. URL: From naylor.b.david at gmail.com Wed May 8 21:16:01 2013 From: naylor.b.david at gmail.com (David Naylor) Date: Wed, 08 May 2013 22:16:01 +0300 Subject: [pypy-dev] Translating pypy on FreeBSD with CLI backend In-Reply-To: <1507130.Uoj2OIuUtg@dragon.dg> References: <1507130.Uoj2OIuUtg@dragon.dg> Message-ID: <2149768.vmcdncTSDn@dragon.dg> On Wednesday, 8 May 2013 22:14:00 David Naylor wrote: > Hi, > > I tried to translate pypy-2.0b2 on FreeBSD-9.1/i386 and I get the following > error: > > [Timer] Timings: > [Timer] annotate --- 218.3 s > [Timer] rtype_ootype --- 137.0 s > [Timer] ========================================== > [Timer] Total: --- 355.4 s > [translation:ERROR] Error: > [translation:ERROR] Traceback (most recent call last): > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/goal/translat > e.py", line 317, in main [translation:ERROR] drv.proceed(goals) > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", > line 733, in proceed [translation:ERROR] return self._execute(goals, > task_skip = self._maybe_skip()) [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/tool/taskengi > ne.py", line 114, in _execute [translation:ERROR] res = self._do(goal, > taskcallable, *args, **kwds) [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", > line 284, in _do [translation:ERROR] res = func() > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", > line 360, in task_rtype_ootype [translation:ERROR] > rtyper.specialize(dont_simplify_again=True) [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 207, in specialize [translation:ERROR] self.specialize_more_blocks() > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 250, in specialize_more_blocks [translation:ERROR] > self.specialize_block(block) > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 404, in specialize_block [translation:ERROR] self.gottypererror(e, > block, hop.spaceop, newops) [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 402, in specialize_block [translation:ERROR] > 
self.translate_hl_to_ll(hop, varmapping) > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 531, in translate_hl_to_ll [translation:ERROR] resultvar = > hop.dispatch() > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 759, in dispatch [translation:ERROR] return translate_meth(self) > [translation:ERROR] File "<17296-codegen > /tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py:611>", > line 5, in translate_op_is_ [translation:ERROR] return pair(r_arg1, > r_arg2).rtype_is_(hop) [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rmodel.py", line > 293, in rtype_is_ [translation:ERROR] return > hop.rtyper.type_system.generic_is(robj1, robj2, hop) [translation:ERROR] > File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/typesystem.py", > line 179, in generic_is [translation:ERROR] roriginal1, roriginal2)) > [translation:ERROR] TyperError: is of instances of the non-instances: > , [translation:ERROR] .. > (pypy.objspace.std.setobject:517)IntegerSetStrategy._difference_update_unwr > apped [translation:ERROR] .. block at 3 with 2 exits(v961) > [translation:ERROR] .. v964 = is_(v962, v963) I tried again and got the following error: [Timer] Timings: [Timer] annotate --- 157.6 s [Timer] rtype_ootype --- 27.7 s [Timer] ========================================== [Timer] Total: --- 185.3 s [translation:ERROR] Error: [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/goal/translate.py", line 317, in main [translation:ERROR] drv.proceed(goals) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", line 733, in proceed [translation:ERROR] return self._execute(goals, task_skip = self._maybe_skip()) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/tool/taskengine.py", line 114, in _execute [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", line 284, in _do [translation:ERROR] res = func() [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", line 360, in task_rtype_ootype [translation:ERROR] rtyper.specialize(dont_simplify_again=True) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 207, in specialize [translation:ERROR] self.specialize_more_blocks() [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 250, in specialize_more_blocks [translation:ERROR] self.specialize_block(block) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 402, in specialize_block [translation:ERROR] self.translate_hl_to_ll(hop, varmapping) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 531, in translate_hl_to_ll [translation:ERROR] resultvar = hop.dispatch() [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 759, in dispatch [translation:ERROR] return translate_meth(self) [translation:ERROR] File "<17262-codegen /tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py:601>", line 4, in translate_op_simple_call [translation:ERROR] return r_arg1.rtype_simple_call(hop) 
[translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rbuiltin.py", line 117, in rtype_simple_call [translation:ERROR] return self._call(hop2) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rbuiltin.py", line 108, in _call [translation:ERROR] v_result = bltintyper(hop2, **kwds_i) [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rlib/objectmodel.py", line 370, in specialize_call [translation:ERROR] hop.gendirectcall(r_list.LIST._ll_resize_hint, v_list, v_sizehint) [translation:ERROR] AttributeError: 'List' object has no attribute '_ll_resize_hint' Also of note: # mono --version Mono JIT compiler version 3.0.3 (tarball Wed May 8 19:40:23 UTC 2013) Copyright (C) 2002-2012 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com TLS: __thread SIGSEGV: altstack Notification: kqueue Architecture: x86 Disabled: none Misc: softdebug LLVM: supported, not enabled. GC: Included Boehm (with typed GC and Parallel Mark) Regards -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: This is a digitally signed message part. URL: From fijall at gmail.com Wed May 8 22:26:39 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 8 May 2013 22:26:39 +0200 Subject: [pypy-dev] Translating pypy on FreeBSD with CLI backend In-Reply-To: <1507130.Uoj2OIuUtg@dragon.dg> References: <1507130.Uoj2OIuUtg@dragon.dg> Message-ID: CLI backend is not really supported any more On Wed, May 8, 2013 at 10:14 PM, David Naylor wrote: > Hi, > > I tried to translate pypy-2.0b2 on FreeBSD-9.1/i386 and I get the following error: > > [Timer] Timings: > [Timer] annotate --- 218.3 s > [Timer] rtype_ootype --- 137.0 s > [Timer] ========================================== > [Timer] Total: --- 355.4 s > [translation:ERROR] Error: > [translation:ERROR] Traceback (most recent call last): > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/goal/translate.py", line 317, in main > [translation:ERROR] drv.proceed(goals) > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", line 733, in proceed > [translation:ERROR] return self._execute(goals, task_skip = self._maybe_skip()) > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/tool/taskengine.py", line 114, in _execute > [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", line 284, in _do > [translation:ERROR] res = func() > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", line 360, in task_rtype_ootype > [translation:ERROR] rtyper.specialize(dont_simplify_again=True) > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 207, in specialize > [translation:ERROR] self.specialize_more_blocks() > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 250, in specialize_more_blocks > [translation:ERROR] self.specialize_block(block) > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 404, in specialize_block > [translation:ERROR] self.gottypererror(e, block, hop.spaceop, newops) > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 
402, in specialize_block > [translation:ERROR] self.translate_hl_to_ll(hop, varmapping) > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 531, in translate_hl_to_ll > [translation:ERROR] resultvar = hop.dispatch() > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line 759, in dispatch > [translation:ERROR] return translate_meth(self) > [translation:ERROR] File "<17296-codegen /tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py:611>", line 5, in translate_op_is_ > [translation:ERROR] return pair(r_arg1, r_arg2).rtype_is_(hop) > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rmodel.py", line 293, in rtype_is_ > [translation:ERROR] return hop.rtyper.type_system.generic_is(robj1, robj2, hop) > [translation:ERROR] File "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/typesystem.py", line 179, in generic_is > [translation:ERROR] roriginal1, roriginal2)) > [translation:ERROR] TyperError: is of instances of the non-instances: , > [translation:ERROR] .. (pypy.objspace.std.setobject:517)IntegerSetStrategy._difference_update_unwrapped > [translation:ERROR] .. block at 3 with 2 exits(v961) > [translation:ERROR] .. v964 = is_(v962, v963) > > Is the CLI supported in general, on FreeBSD or broken? > > Regards > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From alex.gaynor at gmail.com Wed May 8 22:35:00 2013 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Wed, 8 May 2013 13:35:00 -0700 Subject: [pypy-dev] Translating pypy on FreeBSD with CLI backend In-Reply-To: References: <1507130.Uoj2OIuUtg@dragon.dg> Message-ID: We should probably just delete it at this point, it's completely unmaintained, doesn't work, and just confuses people; if someone wants to ressurect it, hg log should be good enough. 
Alex On Wed, May 8, 2013 at 1:26 PM, Maciej Fijalkowski wrote: > CLI backend is not really supported any more > > On Wed, May 8, 2013 at 10:14 PM, David Naylor > wrote: > > Hi, > > > > I tried to translate pypy-2.0b2 on FreeBSD-9.1/i386 and I get the > following error: > > > > [Timer] Timings: > > [Timer] annotate --- 218.3 s > > [Timer] rtype_ootype --- 137.0 s > > [Timer] ========================================== > > [Timer] Total: --- 355.4 s > > [translation:ERROR] Error: > > [translation:ERROR] Traceback (most recent call last): > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/goal/translate.py", > line 317, in main > > [translation:ERROR] drv.proceed(goals) > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", > line 733, in proceed > > [translation:ERROR] return self._execute(goals, task_skip = > self._maybe_skip()) > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/tool/taskengine.py", > line 114, in _execute > > [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", > line 284, in _do > > [translation:ERROR] res = func() > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/translator/driver.py", > line 360, in task_rtype_ootype > > [translation:ERROR] rtyper.specialize(dont_simplify_again=True) > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 207, in specialize > > [translation:ERROR] self.specialize_more_blocks() > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 250, in specialize_more_blocks > > [translation:ERROR] self.specialize_block(block) > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 404, in specialize_block > > [translation:ERROR] self.gottypererror(e, block, hop.spaceop, newops) > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 402, in specialize_block > > [translation:ERROR] self.translate_hl_to_ll(hop, varmapping) > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 531, in translate_hl_to_ll > > [translation:ERROR] resultvar = hop.dispatch() > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py", line > 759, in dispatch > > [translation:ERROR] return translate_meth(self) > > [translation:ERROR] File "<17296-codegen > /tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rtyper.py:611>", > line 5, in translate_op_is_ > > [translation:ERROR] return pair(r_arg1, r_arg2).rtype_is_(hop) > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/rmodel.py", line > 293, in rtype_is_ > > [translation:ERROR] return hop.rtyper.type_system.generic_is(robj1, > robj2, hop) > > [translation:ERROR] File > "/tmp/tmp/pypy/work/pypy-pypy-4b60269153b5/rpython/rtyper/typesystem.py", > line 179, in generic_is > > [translation:ERROR] roriginal1, roriginal2)) > > [translation:ERROR] TyperError: is of instances of the non-instances: > , > > [translation:ERROR] .. > (pypy.objspace.std.setobject:517)IntegerSetStrategy._difference_update_unwrapped > > [translation:ERROR] .. block at 3 with 2 exits(v961) > > [translation:ERROR] .. 
v964 = is_(v962, v963) > > > > Is the CLI supported in general, on FreeBSD or broken? > > > > Regards > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... URL: From naylor.b.david at gmail.com Wed May 8 21:50:14 2013 From: naylor.b.david at gmail.com (David Naylor) Date: Wed, 08 May 2013 22:50:14 +0300 Subject: [pypy-dev] Translating pypy on FreeBSD with CLI backend In-Reply-To: References: <1507130.Uoj2OIuUtg@dragon.dg> Message-ID: <1725112.nlyumnnUK3@dragon.dg> On Wednesday, 8 May 2013 13:35:00 Alex Gaynor wrote: > We should probably just delete it at this point, it's completely > unmaintained, doesn't work, and just confuses people; if someone wants to > ressurect it, hg log should be good enough. Isn't JVM in a similar state (although I think someone may be working on it, if my memory of emails on the list serves me). -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: This is a digitally signed message part. URL: From fijall at gmail.com Wed May 8 23:12:14 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 8 May 2013 23:12:14 +0200 Subject: [pypy-dev] Translating pypy on FreeBSD with CLI backend In-Reply-To: <1725112.nlyumnnUK3@dragon.dg> References: <1507130.Uoj2OIuUtg@dragon.dg> <1725112.nlyumnnUK3@dragon.dg> Message-ID: On Wed, May 8, 2013 at 9:50 PM, David Naylor wrote: > On Wednesday, 8 May 2013 13:35:00 Alex Gaynor wrote: >> We should probably just delete it at this point, it's completely >> unmaintained, doesn't work, and just confuses people; if someone wants to >> ressurect it, hg log should be good enough. > > Isn't JVM in a similar state (although I think someone may be working on it, > if my memory of emails on the list serves me). It is From Ronny.Pfannschmidt at gmx.de Wed May 8 23:26:01 2013 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Wed, 08 May 2013 23:26:01 +0200 Subject: [pypy-dev] Killing OOType? (was Re: Translating pypy on FreeBSD with CLI backend) In-Reply-To: References: <1507130.Uoj2OIuUtg@dragon.dg> <1725112.nlyumnnUK3@dragon.dg> Message-ID: <518AC2E9.6010200@gmx.de> Hi all, since there basically is no maintained ootype backend i wonder about removing the ootype vs lltype abstraction together with them, comments? -- Ronny On 05/08/2013 11:12 PM, Maciej Fijalkowski wrote: > On Wed, May 8, 2013 at 9:50 PM, David Naylor wrote: >> On Wednesday, 8 May 2013 13:35:00 Alex Gaynor wrote: >>> We should probably just delete it at this point, it's completely >>> unmaintained, doesn't work, and just confuses people; if someone wants to >>> ressurect it, hg log should be good enough. >> >> Isn't JVM in a similar state (although I think someone may be working on it, >> if my memory of emails on the list serves me). 
> > It is > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From alex.gaynor at gmail.com Wed May 8 23:27:57 2013 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Wed, 8 May 2013 14:27:57 -0700 Subject: [pypy-dev] Killing OOType? (was Re: Translating pypy on FreeBSD with CLI backend) In-Reply-To: <518AC2E9.6010200@gmx.de> References: <1507130.Uoj2OIuUtg@dragon.dg> <1725112.nlyumnnUK3@dragon.dg> <518AC2E9.6010200@gmx.de> Message-ID: I agree with this, the abstraction doesn't really work well right now, there's way too much code duplication. If we seriously want to have an lltype/ootype distinction this should be redone from scratch (IMO). Alex On Wed, May 8, 2013 at 2:26 PM, Ronny Pfannschmidt < Ronny.Pfannschmidt at gmx.de> wrote: > Hi all, > > since there basically is no maintained ootype backend > i wonder about removing the ootype vs lltype abstraction together with > them, > > comments? > > -- Ronny > > On 05/08/2013 11:12 PM, Maciej Fijalkowski wrote: > >> On Wed, May 8, 2013 at 9:50 PM, David Naylor> >> wrote: >> >>> On Wednesday, 8 May 2013 13:35:00 Alex Gaynor wrote: >>> >>>> We should probably just delete it at this point, it's completely >>>> unmaintained, doesn't work, and just confuses people; if someone wants >>>> to >>>> ressurect it, hg log should be good enough. >>>> >>> >>> Isn't JVM in a similar state (although I think someone may be working on >>> it, >>> if my memory of emails on the list serves me). >>> >> >> It is >> ______________________________**_________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/**mailman/listinfo/pypy-dev >> > > ______________________________**_________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/**mailman/listinfo/pypy-dev > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Thu May 9 10:01:09 2013 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 09 May 2013 10:01:09 +0200 Subject: [pypy-dev] Killing OOType? (was Re: Translating pypy on FreeBSD with CLI backend) In-Reply-To: References: <1507130.Uoj2OIuUtg@dragon.dg> <1725112.nlyumnnUK3@dragon.dg> <518AC2E9.6010200@gmx.de> Message-ID: <518B57C5.10001@gmail.com> On 05/08/2013 11:27 PM, Alex Gaynor wrote: > I agree with this, the abstraction doesn't really work well right now, there's > way too much code duplication. If we seriously want to have an lltype/ootype > distinction this should be redone from scratch (IMO). Although I have an emotional feeling with that piece of code, I think that Alex is right. From arigo at tunes.org Thu May 9 11:19:24 2013 From: arigo at tunes.org (Armin Rigo) Date: Thu, 9 May 2013 11:19:24 +0200 Subject: [pypy-dev] Killing OOType? (was Re: Translating pypy on FreeBSD with CLI backend) In-Reply-To: <518B57C5.10001@gmail.com> References: <1507130.Uoj2OIuUtg@dragon.dg> <1725112.nlyumnnUK3@dragon.dg> <518AC2E9.6010200@gmx.de> <518B57C5.10001@gmail.com> Message-ID: Hi all, On Thu, May 9, 2013 at 10:01 AM, Antonio Cuni wrote: > Although I have an emotional feeling with that piece of code, I think that > Alex is right. I also tend to agree. 
Killing stuff that nobody seriously cares about is sad but good, particularly when it adds some otherwise-unnecessary levels of abstractions everywhere. We should ideally wait e.g. one month for feedback from other developers that may still have plans there. And no, before someone asks, asmjs wouldn't need the OO backend but more likely hacks on top of the LL backend. The OO-vs-LL levels of abstractions are wrong there. A bient?t, Armin. From fuzzyman at gmail.com Thu May 9 17:46:36 2013 From: fuzzyman at gmail.com (Michael Foord) Date: Thu, 9 May 2013 16:46:36 +0100 Subject: [pypy-dev] PyPy bitbucket notifications Message-ID: At some point I was added to the pypy user on bitbucket. Ever since I've received a stream of notifications about the pypy repo being forked. In an effort to reduce these emails I clicked on the "manage notifications" link from one of these emails and unwatched all the pypy repos. It seems I've managed to unwatch these repos on behalf of the pypy user! This is probably not a bad thing (and if it is my sincere apologies - it doesn't seem to be easily undoable). Any user who wants these notifications can simply watch these repos with their individual username rather than via the pypy user. All the best, Michael Foord -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Thu May 9 20:39:45 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 9 May 2013 20:39:45 +0200 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich Message-ID: We're pleased to announce PyPy 2.0. This is a stable release that brings a swath of bugfixes, small performance improvements and compatibility fixes. PyPy 2.0 is a big step for us and we hope in the future we'll be able to provide stable releases more often. You can download the PyPy 2.0 release here: http://pypy.org/download.html The two biggest changes since PyPy 1.9 are: * stackless is now supported including greenlets, which means eventlet and gevent should work (but read below about gevent) * PyPy now contains release 0.6 of `cffi`_ as a builtin module, which is preferred way of calling C from Python that works well on PyPy .. _`cffi`: http://cffi.readthedocs.org If you're using PyPy for anything, it would help us immensely if you fill out the following survey: http://bit.ly/pypysurvey This is for the developers eyes and we will not make any information public without your agreement. What is PyPy? ============= PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It's fast (`pypy 2.0 and cpython 2.7.3`_ performance comparison) due to its integrated tracing JIT compiler. This release supports x86 machines running Linux 32/64, Mac OS X 64 or Windows 32. Windows 64 work is still stalling, we would welcome a volunteer to handle that. ARM support is on the way, as you can see from the recently released alpha for ARM. .. _`pypy 2.0 and cpython 2.7.3`: http://speed.pypy.org Highlights ========== * Stackless including greenlets should work. For gevent, you need to check out `pypycore`_ and use the `pypy-hacks`_ branch of gevent. * cffi is now a module included with PyPy. (`cffi`_ also exists for CPython; the two versions should be fully compatible.) It is the preferred way of calling C from Python that works on PyPy. 
* Callbacks from C are now JITted, which means XML parsing is much faster. * A lot of speed improvements in various language corners, most of them small, but speeding up some particular corners a lot. * The JIT was refactored to emit machine code which manipulates a "frame" that lives on the heap rather than on the stack. This is what makes Stackless work, and it could bring another future speed-up (not done yet). * A lot of stability issues fixed. * Refactoring much of the numpypy array classes, which resulted in removal of lazy expression evaluation. On the other hand, we now have more complete dtype support and support more array attributes. .. _`pypycore`: https://github.com/gevent-on-pypy/pypycore/ .. _`pypy-hacks`: https://github.com/schmir/gevent/tree/pypy-hacks Cheers, fijal, arigo and the PyPy team From alex.e.susu at gmail.com Thu May 9 21:34:33 2013 From: alex.e.susu at gmail.com (RCU) Date: Thu, 09 May 2013 22:34:33 +0300 Subject: [pypy-dev] How can I make more readable the C code obtained from the PyPy translate Message-ID: <518BFA49.8010102@gmail.com> Hello. I am new to PyPy. I managed to write a few RPython programs and translate them with PyPy translate. As a few others have noticed, as well, (see for example http://mail.python.org/pipermail/pypy-dev/2010-December/006616.html, http://grokbase.com/t/python/pypy-dev/124mqreh2r/output-readable-c and https://bugs.pypy.org/issue1220), the generated C code is very cryptic (when compared to the input RPython script). As far as I understand, this is so because of the following facts: - the RPython code gets compiled to Python bytecode and then translated to more basic operations (an IR which I think it does not have a particular name in the PyPy toolchain - or does it? :) ) - heavy optimizations are being applied on this IR before generating code with the C backend. So, is there any simple way to generate more readable C code (more similar, if possible, to RPython script) - maybe some translate.py options I am missing? Thank you, Alex From felipecruz at loogica.net Thu May 9 23:15:11 2013 From: felipecruz at loogica.net (Felipe Cruz) Date: Thu, 9 May 2013 18:15:11 -0300 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: Hi Maciej! * Callbacks from C are now JITted, which means XML parsing is much faster. You mean, cffi callbacks? regards, 2013/5/9 Maciej Fijalkowski > We're pleased to announce PyPy 2.0. This is a stable release that brings > a swath of bugfixes, small performance improvements and compatibility > fixes. > PyPy 2.0 is a big step for us and we hope in the future we'll be able to > provide stable releases more often. > > You can download the PyPy 2.0 release here: > > http://pypy.org/download.html > > The two biggest changes since PyPy 1.9 are: > > * stackless is now supported including greenlets, which means eventlet > and gevent should work (but read below about gevent) > > * PyPy now contains release 0.6 of `cffi`_ as a builtin module, which > is preferred way of calling C from Python that works well on PyPy > > .. _`cffi`: http://cffi.readthedocs.org > > If you're using PyPy for anything, it would help us immensely if you fill > out > the following survey: http://bit.ly/pypysurvey This is for the developers > eyes and we will not make any information public without your agreement. > > What is PyPy? > ============= > > PyPy is a very compliant Python interpreter, almost a drop-in replacement > for > CPython 2.7. 
It's fast (`pypy 2.0 and cpython 2.7.3`_ performance > comparison) > due to its integrated tracing JIT compiler. > > This release supports x86 machines running Linux 32/64, Mac OS X 64 or > Windows 32. Windows 64 work is still stalling, we would welcome a > volunteer > to handle that. ARM support is on the way, as you can see from the recently > released alpha for ARM. > > .. _`pypy 2.0 and cpython 2.7.3`: http://speed.pypy.org > > Highlights > ========== > > * Stackless including greenlets should work. For gevent, you need to check > out `pypycore`_ and use the `pypy-hacks`_ branch of gevent. > > * cffi is now a module included with PyPy. (`cffi`_ also exists for > CPython; the two versions should be fully compatible.) It is the > preferred way of calling C from Python that works on PyPy. > > * Callbacks from C are now JITted, which means XML parsing is much faster. > > * A lot of speed improvements in various language corners, most of them > small, > but speeding up some particular corners a lot. > > * The JIT was refactored to emit machine code which manipulates a "frame" > that lives on the heap rather than on the stack. This is what makes > Stackless work, and it could bring another future speed-up (not done > yet). > > * A lot of stability issues fixed. > > * Refactoring much of the numpypy array classes, which resulted in removal > of > lazy expression evaluation. On the other hand, we now have more complete > dtype support and support more array attributes. > > .. _`pypycore`: https://github.com/gevent-on-pypy/pypycore/ > .. _`pypy-hacks`: https://github.com/schmir/gevent/tree/pypy-hacks > > Cheers, > fijal, arigo and the PyPy team > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- Felipe Cruz http://about.me/felipecruz -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gaynor at gmail.com Thu May 9 23:17:20 2013 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Thu, 9 May 2013 14:17:20 -0700 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: cffi callbacks, as well as those in RPython (like those used by expat) Alex On Thu, May 9, 2013 at 2:15 PM, Felipe Cruz wrote: > Hi Maciej! > > * Callbacks from C are now JITted, which means XML parsing is much faster. > > You mean, cffi callbacks? > > regards, > > > 2013/5/9 Maciej Fijalkowski > >> We're pleased to announce PyPy 2.0. This is a stable release that brings >> a swath of bugfixes, small performance improvements and compatibility >> fixes. >> PyPy 2.0 is a big step for us and we hope in the future we'll be able to >> provide stable releases more often. >> >> You can download the PyPy 2.0 release here: >> >> http://pypy.org/download.html >> >> The two biggest changes since PyPy 1.9 are: >> >> * stackless is now supported including greenlets, which means eventlet >> and gevent should work (but read below about gevent) >> >> * PyPy now contains release 0.6 of `cffi`_ as a builtin module, which >> is preferred way of calling C from Python that works well on PyPy >> >> .. _`cffi`: http://cffi.readthedocs.org >> >> If you're using PyPy for anything, it would help us immensely if you fill >> out >> the following survey: http://bit.ly/pypysurvey This is for the developers >> eyes and we will not make any information public without your agreement. >> >> What is PyPy? 
>> ============= >> >> PyPy is a very compliant Python interpreter, almost a drop-in replacement >> for >> CPython 2.7. It's fast (`pypy 2.0 and cpython 2.7.3`_ performance >> comparison) >> due to its integrated tracing JIT compiler. >> >> This release supports x86 machines running Linux 32/64, Mac OS X 64 or >> Windows 32. Windows 64 work is still stalling, we would welcome a >> volunteer >> to handle that. ARM support is on the way, as you can see from the >> recently >> released alpha for ARM. >> >> .. _`pypy 2.0 and cpython 2.7.3`: http://speed.pypy.org >> >> Highlights >> ========== >> >> * Stackless including greenlets should work. For gevent, you need to check >> out `pypycore`_ and use the `pypy-hacks`_ branch of gevent. >> >> * cffi is now a module included with PyPy. (`cffi`_ also exists for >> CPython; the two versions should be fully compatible.) It is the >> preferred way of calling C from Python that works on PyPy. >> >> * Callbacks from C are now JITted, which means XML parsing is much faster. >> >> * A lot of speed improvements in various language corners, most of them >> small, >> but speeding up some particular corners a lot. >> >> * The JIT was refactored to emit machine code which manipulates a "frame" >> that lives on the heap rather than on the stack. This is what makes >> Stackless work, and it could bring another future speed-up (not done >> yet). >> >> * A lot of stability issues fixed. >> >> * Refactoring much of the numpypy array classes, which resulted in >> removal of >> lazy expression evaluation. On the other hand, we now have more complete >> dtype support and support more array attributes. >> >> .. _`pypycore`: https://github.com/gevent-on-pypy/pypycore/ >> .. _`pypy-hacks`: https://github.com/schmir/gevent/tree/pypy-hacks >> >> Cheers, >> fijal, arigo and the PyPy team >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > > > > -- > Felipe Cruz > http://about.me/felipecruz > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... URL: From felipecruz at loogica.net Thu May 9 23:35:43 2013 From: felipecruz at loogica.net (Felipe Cruz) Date: Thu, 9 May 2013 18:35:43 -0300 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: Thanks Alex! great news! crongrats all! 2013/5/9 Alex Gaynor > cffi callbacks, as well as those in RPython (like those used by expat) > > Alex > > > On Thu, May 9, 2013 at 2:15 PM, Felipe Cruz wrote: > >> Hi Maciej! >> >> * Callbacks from C are now JITted, which means XML parsing is much faster. >> >> You mean, cffi callbacks? >> >> regards, >> >> >> 2013/5/9 Maciej Fijalkowski >> >>> We're pleased to announce PyPy 2.0. This is a stable release that brings >>> a swath of bugfixes, small performance improvements and compatibility >>> fixes. >>> PyPy 2.0 is a big step for us and we hope in the future we'll be able to >>> provide stable releases more often. 
>>> >>> You can download the PyPy 2.0 release here: >>> >>> http://pypy.org/download.html >>> >>> The two biggest changes since PyPy 1.9 are: >>> >>> * stackless is now supported including greenlets, which means eventlet >>> and gevent should work (but read below about gevent) >>> >>> * PyPy now contains release 0.6 of `cffi`_ as a builtin module, which >>> is preferred way of calling C from Python that works well on PyPy >>> >>> .. _`cffi`: http://cffi.readthedocs.org >>> >>> If you're using PyPy for anything, it would help us immensely if you >>> fill out >>> the following survey: http://bit.ly/pypysurvey This is for the >>> developers >>> eyes and we will not make any information public without your agreement. >>> >>> What is PyPy? >>> ============= >>> >>> PyPy is a very compliant Python interpreter, almost a drop-in >>> replacement for >>> CPython 2.7. It's fast (`pypy 2.0 and cpython 2.7.3`_ performance >>> comparison) >>> due to its integrated tracing JIT compiler. >>> >>> This release supports x86 machines running Linux 32/64, Mac OS X 64 or >>> Windows 32. Windows 64 work is still stalling, we would welcome a >>> volunteer >>> to handle that. ARM support is on the way, as you can see from the >>> recently >>> released alpha for ARM. >>> >>> .. _`pypy 2.0 and cpython 2.7.3`: http://speed.pypy.org >>> >>> Highlights >>> ========== >>> >>> * Stackless including greenlets should work. For gevent, you need to >>> check >>> out `pypycore`_ and use the `pypy-hacks`_ branch of gevent. >>> >>> * cffi is now a module included with PyPy. (`cffi`_ also exists for >>> CPython; the two versions should be fully compatible.) It is the >>> preferred way of calling C from Python that works on PyPy. >>> >>> * Callbacks from C are now JITted, which means XML parsing is much >>> faster. >>> >>> * A lot of speed improvements in various language corners, most of them >>> small, >>> but speeding up some particular corners a lot. >>> >>> * The JIT was refactored to emit machine code which manipulates a "frame" >>> that lives on the heap rather than on the stack. This is what makes >>> Stackless work, and it could bring another future speed-up (not done >>> yet). >>> >>> * A lot of stability issues fixed. >>> >>> * Refactoring much of the numpypy array classes, which resulted in >>> removal of >>> lazy expression evaluation. On the other hand, we now have more >>> complete >>> dtype support and support more array attributes. >>> >>> .. _`pypycore`: https://github.com/gevent-on-pypy/pypycore/ >>> .. _`pypy-hacks`: https://github.com/schmir/gevent/tree/pypy-hacks >>> >>> Cheers, >>> fijal, arigo and the PyPy team >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> http://mail.python.org/mailman/listinfo/pypy-dev >>> >> >> >> >> -- >> Felipe Cruz >> http://about.me/felipecruz >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> >> > > > -- > "I disapprove of what you say, but I will defend to the death your right > to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > GPG Key fingerprint: 125F 5C67 DFE9 4084 > -- Felipe Cruz http://about.me/felipecruz -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ddvento at ucar.edu Thu May 9 23:50:00 2013 From: ddvento at ucar.edu (Davide Del Vento) Date: Thu, 09 May 2013 15:50:00 -0600 Subject: [pypy-dev] How can I make more readable the C code obtained from the PyPy translate In-Reply-To: <518BFA49.8010102@gmail.com> References: <518BFA49.8010102@gmail.com> Message-ID: <518C1A08.6010405@ucar.edu> Disclaimer: this is just my opinion and I'm not a pypy developer. I don't think what you want exists in pypy and I don't think it would be useful. If you need to look at the generated C code (why?), you may probably want to look at cython. On 05/09/2013 01:34 PM, RCU wrote: > Hello. > I am new to PyPy. > > I managed to write a few RPython programs and translate them with > PyPy translate. > As a few others have noticed, as well, (see for example > http://mail.python.org/pipermail/pypy-dev/2010-December/006616.html, > http://grokbase.com/t/python/pypy-dev/124mqreh2r/output-readable-c and > https://bugs.pypy.org/issue1220), the generated C code is very cryptic > (when compared to the input RPython script). > As far as I understand, this is so because of the following facts: > - the RPython code gets compiled to Python bytecode and then > translated to more basic operations (an IR which I think it does not > have a particular name in the PyPy toolchain - or does it? :) ) > - heavy optimizations are being applied on this IR before > generating code with the C backend. > > So, is there any simple way to generate more readable C code (more > similar, if possible, to RPython script) - maybe some translate.py > options I am missing? > > Thank you, > Alex > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Fri May 10 00:32:14 2013 From: arigo at tunes.org (Armin Rigo) Date: Fri, 10 May 2013 00:32:14 +0200 Subject: [pypy-dev] How can I make more readable the C code obtained from the PyPy translate In-Reply-To: <518BFA49.8010102@gmail.com> References: <518BFA49.8010102@gmail.com> Message-ID: Hi Alex, On Thu, May 9, 2013 at 9:34 PM, RCU wrote: > I managed to write a few RPython programs and translate them with PyPy > translate. I'm sure you've read the usual warning: http://doc.pypy.org/en/latest/faq.html#do-i-have-to-rewrite-my-programs-in-rpython . This contains the implicit answer to your question: if your goal is not to get a good GC and JIT compiler for a dynamic language interpreter, but you focus more on getting readable C code from small RPython programs, then well, write your program in C in the first place... RPython is not designed to produce simple C code. A bient?t, Armin. From drsalists at gmail.com Fri May 10 04:56:04 2013 From: drsalists at gmail.com (Dan Stromberg) Date: Thu, 9 May 2013 19:56:04 -0700 Subject: [pypy-dev] SSL version? Message-ID: I get an error when trying to run pypy 2.0 on a Debian Wheezy system: dstromberg at deskie:~/src/home-svn/backshift/trunk$ /usr/local/pypy-2.0/bin/pypy /usr/local/pypy-2.0/bin/pypy: error while loading shared libraries: libssl.so.0.9.8: cannot open shared object file: No such file or directory dstromberg at deskie:~/src/home-svn/backshift/trunk$ Isn't libssl 0.9.8 getting kind of old? I don't see it in synaptic. From arigo at tunes.org Fri May 10 08:33:21 2013 From: arigo at tunes.org (Armin Rigo) Date: Fri, 10 May 2013 08:33:21 +0200 Subject: [pypy-dev] SSL version? 
In-Reply-To: References: Message-ID: Hi Dan, On Fri, May 10, 2013 at 4:56 AM, Dan Stromberg wrote: > I get an error when trying to run pypy 2.0 on a Debian Wheezy system: ...oups, sorry, our 32-bit chrooted buildslave is Ubuntu 10.04, and not (as I thought first) a similar Debian 6 Squeeze. Fixed the links. So anyway, as we mention on the links (I guess I'll make this **bold**), the binaries are for precise systems. If you want to complain that we should instead use distribution X on our buildslave for Y, then I fear that the answer is "Linux distributions are hard and yes we know it". If you want to try to convince us that a buildslave with Ubuntu 12.04 would be really more useful than Ubuntu 10.04 by now, then come to IRC: we'd be happy to give you access to the buildslave if you (or anyone else for that matter) want to do the upgrade. It's a schroot using so far "Ubuntu Lucid 10.04 for i386 (session chroot)". A bient?t, Armin. From tismer at stackless.com Fri May 10 11:26:33 2013 From: tismer at stackless.com (Christian Tismer) Date: Fri, 10 May 2013 11:26:33 +0200 Subject: [pypy-dev] x is y <=> id(x)==id(y) In-Reply-To: References: Message-ID: <518CBD49.4010005@stackless.com> On 06.05.13 08:54, Armin Rigo wrote: > Hi Simon, > > On Mon, May 6, 2013 at 12:48 AM, Simon Cross wrote: >> I was thinking along similar signs -- we could ask for things like "x >> is ''" or "x is 3" to be added to PEP8 (I think any use of "is" with a >> constant on one or more sides is likely suspect). > That may be a good idea. If the compiler emits SyntaxWarnings for > these cases, then maybe it's all we need to cover most of the bad > usages. > I highly appreciate this idea, too! Educating people to avoid mis-use of "is" has probably more impact in the long term, because the pep8 module is pretty often used as a measure of code cleaning. cheers - chris -- Christian Tismer :^) Software Consulting : Have a break! Take a ride on Python's Karl-Liebknecht-Str. 121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de phone +49 173 24 18 776 fax +49 (30) 700143-0023 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From kura at tangentlabs.co.uk Fri May 10 11:50:13 2013 From: kura at tangentlabs.co.uk (Kura) Date: Fri, 10 May 2013 10:50:13 +0100 Subject: [pypy-dev] SSL version? In-Reply-To: References: Message-ID: <518CC2D5.70005@tangentlabs.co.uk> I'd be happy to help with building for different versions of Debian or Ubuntu. I myself use a version of Debian that SSL for PyPy does not work on on almost all of my servers and tend to have to translate PyPy quite frequently. On 10/05/13 07:33, Armin Rigo wrote: > Hi Dan, > > On Fri, May 10, 2013 at 4:56 AM, Dan Stromberg wrote: >> I get an error when trying to run pypy 2.0 on a Debian Wheezy system: > > ...oups, sorry, our 32-bit chrooted buildslave is Ubuntu 10.04, and > not (as I thought first) a similar Debian 6 Squeeze. Fixed the links. > > So anyway, as we mention on the links (I guess I'll make this > **bold**), the binaries are for precise systems. If you want to > complain that we should instead use distribution X on our buildslave > for Y, then I fear that the answer is "Linux distributions are hard > and yes we know it". 
If you want to try to convince us that a > buildslave with Ubuntu 12.04 would be really more useful than Ubuntu > 10.04 by now, then come to IRC: we'd be happy to give you access to > the buildslave if you (or anyone else for that matter) want to do the > upgrade. It's a schroot using so far "Ubuntu Lucid 10.04 for i386 > (session chroot)". > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- Kura Systems Engineer Tangent Labs t: @kuramanga e: kura at tangentlabs.co.uk w: http://syslog.tv/ t: +44 (0)20 7462 6100 m: +44 (0)7525 767114 My email is signed with my PGP key by default. The key is available on all public servers, on this email as "0x49FCF4D9.asc" and also has a public URL to the key embedded in this email's headers. -------------- next part -------------- A non-text attachment was scrubbed... Name: 0x49FCF4D9.asc Type: application/pgp-keys Size: 14999 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: kura.vcf Type: text/x-vcard Size: 317 bytes Desc: not available URL: From fijall at gmail.com Fri May 10 12:59:22 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 10 May 2013 12:59:22 +0200 Subject: [pypy-dev] SSL version? In-Reply-To: <518CC2D5.70005@tangentlabs.co.uk> References: <518CC2D5.70005@tangentlabs.co.uk> Message-ID: On Fri, May 10, 2013 at 11:50 AM, Kura wrote: > I'd be happy to help with building for different versions of Debian or > Ubuntu. > > I myself use a version of Debian that SSL for PyPy does not work on on > almost all of my servers and tend to have to translate PyPy quite > frequently. Hi Stefano Rivera is our debian maintainer, maybe he has an opinion how you can help. Put him on the CC Cheers, fijal From phyo.arkarlwin at gmail.com Fri May 10 15:45:39 2013 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Fri, 10 May 2013 20:15:39 +0630 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: Good news. Unfortunately there is a dependency on libtinfo.so : ./bin/pypy: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory On Fri, May 10, 2013 at 1:09 AM, Maciej Fijalkowski wrote: > We're pleased to announce PyPy 2.0. This is a stable release that brings > a swath of bugfixes, small performance improvements and compatibility > fixes. > PyPy 2.0 is a big step for us and we hope in the future we'll be able to > provide stable releases more often. > > You can download the PyPy 2.0 release here: > > http://pypy.org/download.html > > The two biggest changes since PyPy 1.9 are: > > * stackless is now supported including greenlets, which means eventlet > and gevent should work (but read below about gevent) > > * PyPy now contains release 0.6 of `cffi`_ as a builtin module, which > is preferred way of calling C from Python that works well on PyPy > > .. _`cffi`: http://cffi.readthedocs.org > > If you're using PyPy for anything, it would help us immensely if you fill > out > the following survey: http://bit.ly/pypysurvey This is for the developers > eyes and we will not make any information public without your agreement. > > What is PyPy? > ============= > > PyPy is a very compliant Python interpreter, almost a drop-in replacement > for > CPython 2.7. It's fast (`pypy 2.0 and cpython 2.7.3`_ performance > comparison) > due to its integrated tracing JIT compiler. 
> > This release supports x86 machines running Linux 32/64, Mac OS X 64 or > Windows 32. Windows 64 work is still stalling, we would welcome a > volunteer > to handle that. ARM support is on the way, as you can see from the recently > released alpha for ARM. > > .. _`pypy 2.0 and cpython 2.7.3`: http://speed.pypy.org > > Highlights > ========== > > * Stackless including greenlets should work. For gevent, you need to check > out `pypycore`_ and use the `pypy-hacks`_ branch of gevent. > > * cffi is now a module included with PyPy. (`cffi`_ also exists for > CPython; the two versions should be fully compatible.) It is the > preferred way of calling C from Python that works on PyPy. > > * Callbacks from C are now JITted, which means XML parsing is much faster. > > * A lot of speed improvements in various language corners, most of them > small, > but speeding up some particular corners a lot. > > * The JIT was refactored to emit machine code which manipulates a "frame" > that lives on the heap rather than on the stack. This is what makes > Stackless work, and it could bring another future speed-up (not done > yet). > > * A lot of stability issues fixed. > > * Refactoring much of the numpypy array classes, which resulted in removal > of > lazy expression evaluation. On the other hand, we now have more complete > dtype support and support more array attributes. > > .. _`pypycore`: https://github.com/gevent-on-pypy/pypycore/ > .. _`pypy-hacks`: https://github.com/schmir/gevent/tree/pypy-hacks > > Cheers, > fijal, arigo and the PyPy team > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Fri May 10 16:07:09 2013 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 10 May 2013 16:07:09 +0200 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: 2013/5/10 Phyo Arkar > Unfortunately there is a dependency on libtinfo.so : > > ./bin/pypy: error while loading shared libraries: libtinfo.so.5: cannot > open shared object file: No such file or directory > Yes, this is certainly pulled by the _curses module. But this was also the case with pypy-1.9. On which system are you running? -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From phyo.arkarlwin at gmail.com Fri May 10 16:16:55 2013 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Fri, 10 May 2013 20:46:55 +0630 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: Sabayon Linux64bit i am not sure i tested 1.9 on this laptop On May 10, 2013 8:37 PM, "Amaury Forgeot d'Arc" wrote: > > 2013/5/10 Phyo Arkar > >> Unfortunately there is a dependency on libtinfo.so : >> >> ./bin/pypy: error while loading shared libraries: libtinfo.so.5: cannot >> open shared object file: No such file or directory >> > > Yes, this is certainly pulled by the _curses module. > > But this was also the case with pypy-1.9. > On which system are you running? > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... 
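For anyone hitting the same missing-libtinfo error, a small diagnostic sketch (not from the
thread; the library names are the usual Linux ones and may differ per distribution) to check
from Python what the loader can find:

    import ctypes.util

    # find_library returns something like 'libtinfo.so.5' or 'libncurses.so.5',
    # or None if the loader cannot locate the library at all.
    print ctypes.util.find_library("tinfo")
    print ctypes.util.find_library("ncurses")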
URL: From wlavrijsen at lbl.gov Fri May 10 16:55:02 2013 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Fri, 10 May 2013 07:55:02 -0700 (PDT) Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: Phyo, On Fri, 10 May 2013, Phyo Arkar wrote: > ./bin/pypy: error while loading shared libraries: libtinfo.so.5: cannot > open shared object file: No such file or directory not every distro splits up tinfo and ncurses. On SuSE, what I did was to provide a symlink libtinfo.so.5 -> libncurses.so and that satisfied the dependencies. Might be that that works on your system, too. Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From phyo.arkarlwin at gmail.com Fri May 10 17:02:25 2013 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Fri, 10 May 2013 21:32:25 +0630 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: Ah that's why i cannot find libtinfo.so thanks i will try that. On May 10, 2013 9:25 PM, wrote: > Phyo, > > On Fri, 10 May 2013, Phyo Arkar wrote: > >> ./bin/pypy: error while loading shared libraries: libtinfo.so.5: cannot >> open shared object file: No such file or directory >> > > not every distro splits up tinfo and ncurses. On SuSE, what I did was to > provide a symlink libtinfo.so.5 -> libncurses.so and that satisfied the > dependencies. Might be that that works on your system, too. > > Best regards, > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phyo.arkarlwin at gmail.com Fri May 10 17:07:43 2013 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Fri, 10 May 2013 21:37:43 +0630 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: just did that. got ELF header error @ /usr/lib64/libtinfo.so.5 On May 10, 2013 9:32 PM, "Phyo Arkar" wrote: > Ah that's why i cannot find libtinfo.so > thanks i will try that. > On May 10, 2013 9:25 PM, wrote: > >> Phyo, >> >> On Fri, 10 May 2013, Phyo Arkar wrote: >> >>> ./bin/pypy: error while loading shared libraries: libtinfo.so.5: cannot >>> open shared object file: No such file or directory >>> >> >> not every distro splits up tinfo and ncurses. On SuSE, what I did was to >> provide a symlink libtinfo.so.5 -> libncurses.so and that satisfied the >> dependencies. Might be that that works on your system, too. >> >> Best regards, >> Wim >> -- >> WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlavrijsen at lbl.gov Fri May 10 19:31:20 2013 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Fri, 10 May 2013 10:31:20 -0700 (PDT) Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: Phyo, On Fri, 10 May 2013, Phyo Arkar wrote: > just did that. got ELF header error @ > /usr/lib64/libtinfo.so.5 what ELF header error? (Point being: pypy is linked with libncurses.so as well, so that library has to be correct to begin with.) 
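A sketch of that symlink workaround in Python form (the paths are assumptions for a 64-bit
layout and will differ per distribution; creating the link normally requires root):

    import os

    src = "/usr/lib64/libncurses.so.5"   # the library that actually exists
    dst = "/usr/lib64/libtinfo.so.5"     # the name the pypy binary is asking for

    if os.path.exists(src) and not os.path.exists(dst):
        os.symlink(src, dst)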
Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From pjenvey at underboss.org Fri May 10 19:57:17 2013 From: pjenvey at underboss.org (Philip Jenvey) Date: Fri, 10 May 2013 10:57:17 -0700 Subject: [pypy-dev] [pypy-commit] cffi default: Try to preserve the exact error message In-Reply-To: <20130510140718.358E31C00F4@cobra.cs.uni-duesseldorf.de> References: <20130510140718.358E31C00F4@cobra.cs.uni-duesseldorf.de> Message-ID: <1BBAEB1C-031D-407F-A03E-D8990862A038@underboss.org> Could you please switch this to Py3 compat syntax, KeyError as e? On May 10, 2013, at 7:07 AM, arigo wrote: > Author: Armin Rigo > Branch: > Changeset: r1250:452b57d57304 > Date: 2013-05-10 16:07 +0200 > http://bitbucket.org/cffi/cffi/changeset/452b57d57304/ > > Log: Try to preserve the exact error message > > diff --git a/cffi/api.py b/cffi/api.py > --- a/cffi/api.py > +++ b/cffi/api.py > @@ -372,8 +372,8 @@ > BType = ffi._get_cached_btype(tp) > try: > value = backendlib.load_function(BType, name) > - except KeyError: > - raise AttributeError(name) > + except KeyError, e: > + raise AttributeError('%s: %s' % (name, e)) > library.__dict__[name] = value > return > # > _______________________________________________ > pypy-commit mailing list > pypy-commit at python.org > http://mail.python.org/mailman/listinfo/pypy-commit -- Philip Jenvey From arigo at tunes.org Fri May 10 22:55:52 2013 From: arigo at tunes.org (Armin Rigo) Date: Fri, 10 May 2013 22:55:52 +0200 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: Hi, On Fri, May 10, 2013 at 3:45 PM, Phyo Arkar wrote: > ./bin/pypy: error while loading shared libraries: libtinfo.so.5: cannot open > shared object file: No such file or directory The Linux binaries are provided for *32-bit Ubuntu 10.04* as well as *64-bit Ubuntu 12.04*. If you are not using exactly these versions, it won't work, likely, or you need to hack a lot --- or you need to translate your own version from source --- or you need to wait until your distribution adds a package. A bient?t, Armin. From phyo.arkarlwin at gmail.com Sat May 11 00:47:37 2013 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sat, 11 May 2013 05:17:37 +0630 Subject: [pypy-dev] PyPy 2.0 - Einstein Sandwich In-Reply-To: References: Message-ID: it used to work fine before. its not problem i can build from source. On May 11, 2013 3:26 AM, "Armin Rigo" wrote: > Hi, > > On Fri, May 10, 2013 at 3:45 PM, Phyo Arkar > wrote: > > ./bin/pypy: error while loading shared libraries: libtinfo.so.5: cannot > open > > shared object file: No such file or directory > > The Linux binaries are provided for *32-bit Ubuntu 10.04* as well as > *64-bit Ubuntu 12.04*. If you are not using exactly these versions, > it won't work, likely, or you need to hack a lot --- or you need to > translate your own version from source --- or you need to wait until > your distribution adds a package. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat May 11 08:14:16 2013 From: arigo at tunes.org (Armin Rigo) Date: Sat, 11 May 2013 08:14:16 +0200 Subject: [pypy-dev] Using "rpath" Message-ID: Hi all, I think I know how to solve the two linking issues we were discussing on irc yesterday: using the "rpath" (thanks squeaky_pl for originally pointing it out). See for example http://stackoverflow.com/a/6323222/1556290 . 
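As a toy illustration of the "$ORIGIN" rpath idea (my own sketch with made-up file names; it
assumes gcc and an already-built dist/libfoo.so):

    import subprocess

    # Link main.c against dist/libfoo.so and embed an rpath of $ORIGIN, so the
    # resulting executable searches its own directory for shared libraries first.
    subprocess.check_call([
        "gcc", "main.c", "-o", "dist/main",
        "-Ldist", "-lfoo",
        "-Wl,-rpath,$ORIGIN",    # literal $ORIGIN; no shell expansion here
    ])
    # 'readelf -d dist/main' should now show an RPATH (or RUNPATH) entry of
    # $ORIGIN, and the dist/ directory can be moved around as one unit.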
1) This could let us distribute more self-contained executables by putting the libffi.so, libssl.so, etc. along it in the same directory, and specifying an rpath of "$ORIGIN". 2) This could be used for building in "--shared" mode, with a small executable and a big libpypy.so in the same directory, again using the rpath "$ORIGIN". A bient?t, Armin. From g2p.code at gmail.com Sat May 11 10:21:14 2013 From: g2p.code at gmail.com (Gabriel de Perthuis) Date: Sat, 11 May 2013 08:21:14 +0000 (UTC) Subject: [pypy-dev] Using "rpath" References: Message-ID: On Sat, 11 May 2013 08:14:16 +0200, Armin Rigo wrote: > Hi all, > > I think I know how to solve the two linking issues we were discussing > on irc yesterday: using the "rpath" (thanks squeaky_pl for originally > pointing it out). See for example > http://stackoverflow.com/a/6323222/1556290 . > > 1) This could let us distribute more self-contained executables by > putting the libffi.so, libssl.so, etc. along it in the same directory, > and specifying an rpath of "$ORIGIN". > > 2) This could be used for building in "--shared" mode, with a small > executable and a big libpypy.so in the same directory, again using the > rpath "$ORIGIN". That has the same downsides as static linking wrt security. You can get dynamic linking against the distro version if you build against 0.9.8 headers and link against the so.0 library, though apparently some rpm distros lack the required symlink. http://stackoverflow.com/a/16270689/ The most portable approach is to build against the 0.9.8 headers, not link against anything, and dlopen at runtime. See - http://stackoverflow.com/questions/2827181/ - http://stackoverflow.com/a/16263876/ From arigo at tunes.org Sat May 11 10:43:04 2013 From: arigo at tunes.org (Armin Rigo) Date: Sat, 11 May 2013 10:43:04 +0200 Subject: [pypy-dev] Using "rpath" In-Reply-To: References: Message-ID: Hi Gabriel, On Sat, May 11, 2013 at 10:21 AM, Gabriel de Perthuis wrote: > That has the same downsides as static linking wrt security. No: if you care, you can replace the static libraries with symlinks to the exact dynamic libraries of your system. (Anyway, if you really care about security, you're going to wait for the package-provided pypy.) A bient?t, Armin. From arigo at tunes.org Sat May 11 10:50:08 2013 From: arigo at tunes.org (Armin Rigo) Date: Sat, 11 May 2013 10:50:08 +0200 Subject: [pypy-dev] Using "rpath" In-Reply-To: References: Message-ID: Hi again, On Sat, May 11, 2013 at 10:43 AM, Armin Rigo wrote: > (Anyway, if you really care about security, you're going to wait > for the package-provided pypy.) I mean of course the distribution-provided one. A bient?t, Armin. From arigo at tunes.org Sun May 12 10:57:01 2013 From: arigo at tunes.org (Armin Rigo) Date: Sun, 12 May 2013 10:57:01 +0200 Subject: [pypy-dev] Using "rpath" In-Reply-To: References: Message-ID: Hi all, Here I'm describing the best I could attempt about the problem of binary distributions. It's still a major mess to implement. It may be done some day, but basically at this point I'm looking for contributors. A first note: this is *only* about the Linux Binary distributions that we provide on pypy.org/download.html. It has no effect on the pypy provided (later) by your particular Linux distribution. The goal is to make it *work* in a reasonable way. 
The goal is *not* to be sure that it will automatically link with the latest version of libssl.so.x.y.z that you happen to have installed on your system under some non-automatically-guessable name, because the goal is not magic. It would however let you do that manually if you care to. (If you want something 100% automatic, go to your own Linux distribution and help them upgrade.) The idea would be to have in /opt/pypy-2.0.x/bin an executable "pypy", which dynamically links to libssl.so, libcrypto.so, libffi.so, etc.; but to also, by default, put actual libraries in the same directory as the executable, under the versionless names "libcrypto.so" etc., and configure the executable in such a way that it would load these libraries (without using LD_LIBRARY_PATH; see below). This is for the *work* part. If people downloading the linux binaries want better, they can remove the libraries and have the pypy finds them on their system; or if their system is missing files with the exact same name, they can instead put in /opt/pypy-2.0.x/bin some symlinks. (This is much better than having to stick the symlinks in /usr/lib/.) Well, that's the goal. It's of course not so simple. Here is what I found out: - First, I'm going to ignore the optional notion of "versioned symbols" here. - In a Linux .so file, there might be an entry "SONAME" embedded in the file, which is supposed to contain "libfoo.so.N" with a single number N. In the case of libcrypto/libssl it is in general not that, of course. It is for example "libcrypto.so.1.0.0". The SONAME might also be absent. - When we compile an executable with "gcc -lfoo" or "gcc -l:/path/libfoo.so" or just "gcc /path/libfoo.so", gcc looks up the real libfoo.so.x.y.z and reads it. It then puts the name of the library inside the executable, for later dynamic linking, as can be seen in the left column of "ldd executable". Now the 'name' written there is just the filename if the .so has no SONAME. But if the .so has an SONAME, it completely and definitely ignores the filename given on the command-line, and puts the SONAME instead. - Another option to gcc is "-Wl,-rpath=X" which embeds another entry into the produced binary: RPATH, giving it the value X. This should be a path: the "run-time search path". It may start with the special string "$ORIGIN", which is resolved to "wherever the binary actually is at run-time". - At run-time, the entries in that table are searched first in the RPATH if any, then in the system's default places. On a default "gcc -lcrypto" for example, gcc finds /usr/lib/libcrypto.so (with or without extra .x.y.z), loads its SONAME, finds "libcrypto.so.x.y.z", and so sticks "libcrypto.so.x.y.z" into the executable. At run-time, the dynamic linker will thus only look for a file "libcrypto.so.x.y.z". I found no way to tell gcc "ignore the SONAME", so there is no way to produce an executable that would contain a more general name. So the problems we're facing are: the SONAME of libssl/libcrypto are too precise for us; and anyway the libc itself has added incompatibilities in recent versions too. It's stupid but it seems that the only reasonable way forward is really to install on all buildslaves a custom chroot. This chroot would contain an old-enough version of the libraries like the libc. It would also contain a *custom* compiled version of libssl/libcrypto (and if needed libffi and all others) which does not include any SONAME. In this way, we can choose with gcc options what names end up in the binary. 
We can pick a general name, and package with the Linux binaries .so's with these general names. So, anyone up to the task? A bient?t, Armin. From _kfj at yahoo.com Sun May 12 11:12:52 2013 From: _kfj at yahoo.com (Kay F. Jahnke) Date: Sun, 12 May 2013 11:12:52 +0200 Subject: [pypy-dev] cffi wrapper for libeinspline, a cubic B-spline library, runs on Pypy Message-ID: <518F5D14.3020407@yahoo.com> Hi group! I've been working on wrapping libeinspline, a comprehensive C library for creating and evaluating cubic B-splines in 1-3 dimensions and real and complex data types. The wrap is done using cffi, and it runs with Pypy and Cpython. I've now put an alpha version online at https://bitbucket.org/kfj/python-bspline I am pointing you to this project since it actually works well with Pypy and has a few bits in it which you may find inspiring. I'd welcome comments. I wrote the wrapper because I wanted B-splines for a Pypy pet project of mine and couldn't find anything suitable. It may be of use while a Pypy version of scipy hasn't materialized, and even has a few bits which I haven't found in scipy. Kay F. Jahnke From emil.kroymann at isaco.de Sun May 12 14:50:59 2013 From: emil.kroymann at isaco.de (Emil Kroymann) Date: Sun, 12 May 2013 14:50:59 +0200 Subject: [pypy-dev] Segfault with pypy-2.0, gevent, dnspython Message-ID: <20130512145059.6e057c7b@descartes> Hi List, I discovered a segementation fault with pypy-2.0 and gevent, while playing with the dnspython library. I compiled pypy from the source tarball downloaded from the pypy website with jit enabled and installed gevent with pypy support as described in the pypycore repository. Below is a log of the simple steps needed to reproduce the problem. From the generated core file, it seems, the problem occurs in the minimark gc. I also attached the core file to this mail. Regards, Emil (pypy-gevent)emil at descartes:~/Play/pypy-gevent/test$ ulimit -c unlimited (pypy-gevent)emil at descartes:~/Play/pypy-gevent/test$ python Python 2.7.3 (b9c3566aa0170aaa736db0491d542c309ec7a5dc, May 12 2013, 11:22:30) [PyPy 2.0.0 with GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``All problems in computer science can be solved by another level of indirection. --Butler Lampson'' >>>> from gevent import monkey; monkey.patch_all() >>>> from dns.resolver import query >>>> a = query('www.google.de') *** using ev loop >>>> dir(a) Speicherzugriffsfehler (Speicherabzug geschrieben) (pypy-gevent)emil at descartes:~/Play/pypy-gevent/test$ gdb --core core --args python GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04 Copyright (C) 2012 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /home/emil/.virtualenvs/pypy-gevent/bin/python...(no debugging symbols found)...done. [New LWP 3216] warning: Can't read pathname for load map: Eingabe-/Ausgabefehler. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Core was generated by `python'. Program terminated with signal 11, Segmentation fault. 
#0 0x000000000110b2a4 in pypy_g_walk_to_parent_frame () (gdb) bt #0 0x000000000110b2a4 in pypy_g_walk_to_parent_frame () #1 0x000000000110b75c in pypy_g_walk_stack_from () #2 0x000000000110b899 in pypy_g__asm_callback () #3 0x000000000130b725 in pypy_asm_stackwalk () #4 0x0000000001104f32 in pypy_g_MiniMarkGC_minor_collection.part.1 () #5 0x00000000011099f8 in pypy_g_MiniMarkGC_collect_and_reserve () #6 0x0000000000ee4482 in pypy_g_invalidate_loop () #7 0x000000000104f85f in pypy_g_QuasiImmut_invalidate () #8 0x0000000000cdbdc3 in pypy_g_ModuleDictStrategy_setitem_str () #9 0x0000000000419539 in pypy_g___mm_setitem_W_DictMultiObject_W_Root_W_Root () #10 0x0000000000c9a3ff in pypy_g_displayhook () #11 0x0000000000849ede in pypy_g_BuiltinCode1_fastcall_1 () #12 0x00000000008475b0 in pypy_g_funccall_valuestack__AccessDirect_None () #13 0x000000000086a86c in pypy_g_CALL_FUNCTION__AccessDirect_None () #14 0x000000000087357f in pypy_g_dispatch_bytecode__AccessDirect_None () #15 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #16 0x0000000000c827a2 in pypy_g_portal_4 () #17 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () #18 0x000000000085e826 in pypy_g_PyFrame_run () #19 0x000000000080a67b in pypy_g_call_function__star_1 () #20 0x0000000000872295 in pypy_g_dispatch_bytecode__AccessDirect_None () #21 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #22 0x0000000000c827a2 in pypy_g_portal_4 () #23 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () #24 0x000000000085e826 in pypy_g_PyFrame_run () #25 0x000000000086cff4 in pypy_g_EXEC_STMT__AccessDirect_None () #26 0x00000000008726aa in pypy_g_dispatch_bytecode__AccessDirect_None () #27 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #28 0x0000000000c827a2 in pypy_g_portal_4 () #29 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () #30 0x000000000085e826 in pypy_g_PyFrame_run () #31 0x0000000000cda8ba in pypy_g_CALL_METHOD__AccessDirect_star_1 () #32 0x0000000000873867 in pypy_g_dispatch_bytecode__AccessDirect_None () #33 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #34 0x0000000000c827a2 in pypy_g_portal_4 () #35 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () #36 0x000000000085e826 in pypy_g_PyFrame_run () #37 0x0000000000cda8ba in pypy_g_CALL_METHOD__AccessDirect_star_1 () ---Type to continue, or q to quit--- #38 0x0000000000873867 in pypy_g_dispatch_bytecode__AccessDirect_None () #39 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #40 0x0000000000c827a2 in pypy_g_portal_4 () #41 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () #42 0x000000000085e826 in pypy_g_PyFrame_run () #43 0x0000000000cda8ba in pypy_g_CALL_METHOD__AccessDirect_star_1 () #44 0x0000000000873867 in pypy_g_dispatch_bytecode__AccessDirect_None () #45 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #46 0x0000000000c827a2 in pypy_g_portal_4 () #47 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () #48 0x000000000085e826 in pypy_g_PyFrame_run () #49 0x000000000086a86c in pypy_g_CALL_FUNCTION__AccessDirect_None () #50 0x000000000087357f in pypy_g_dispatch_bytecode__AccessDirect_None () #51 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #52 0x0000000000c827a2 in pypy_g_portal_4 () #53 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () 
#54 0x000000000085e826 in pypy_g_PyFrame_run () #55 0x0000000000cb71fd in pypy_g_call_args () #56 0x000000000086a74e in pypy_g_call_function__AccessDirect_None () #57 0x000000000087375c in pypy_g_dispatch_bytecode__AccessDirect_None () #58 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #59 0x0000000000c827a2 in pypy_g_portal_4 () #60 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () #61 0x000000000085e826 in pypy_g_PyFrame_run () #62 0x00000000008473d3 in pypy_g_funccall_valuestack__AccessDirect_None () #63 0x000000000086a86c in pypy_g_CALL_FUNCTION__AccessDirect_None () #64 0x000000000087357f in pypy_g_dispatch_bytecode__AccessDirect_None () #65 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #66 0x0000000000c827a2 in pypy_g_portal_4 () #67 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () #68 0x000000000085e826 in pypy_g_PyFrame_run () #69 0x0000000000cb71fd in pypy_g_call_args () #70 0x000000000086a74e in pypy_g_call_function__AccessDirect_None () #71 0x000000000087370f in pypy_g_dispatch_bytecode__AccessDirect_None () #72 0x0000000000876583 in pypy_g_handle_bytecode__AccessDirect_None () #73 0x0000000000c827a2 in pypy_g_portal_4 () #74 0x0000000001067113 in pypy_g_ll_portal_runner__Unsigned_Bool_pypy_interpreter () #75 0x000000000085e826 in pypy_g_PyFrame_run () ---Type to continue, or q to quit--- #76 0x000000000080a9ce in pypy_g_call_function__star_2 () #77 0x0000000000763aea in pypy_g_entry_point () #78 0x0000000001306234 in pypy_main_function () #79 0x00007f84c489476d in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6 #80 0x0000000000411c91 in _start () (gdb) quit -- Emil Kroymann VoIP Services Engineer Email: emil.kroymann at isaco.de Tel: +49-30-203899885 Mobile: +49-151-62820588 ISACO GmbH Kurf?rstenstra?e 79 10787 Berlin Germany Amtsgericht Charlottenburg, HRB 112464B Gesch?ftsf?hrer: Daniel Frommherz -------------- next part -------------- A non-text attachment was scrubbed... Name: core.gz Type: application/x-gzip Size: 7176935 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From ddvento at ucar.edu Sun May 12 15:32:03 2013 From: ddvento at ucar.edu (Davide Del Vento) Date: Sun, 12 May 2013 07:32:03 -0600 Subject: [pypy-dev] Using "rpath" In-Reply-To: References: Message-ID: <518F99D3.7050309@ucar.edu> Instead of using the gcc (actually ld) option "-Wl,-rpath=X", which may require changes in your build/link structure you can more easily use patchelf which is able to edit the RPATH of an existing binary. I use it often an is very convenient (however I don't have the different versions issue) You may also want to provide the libraries as a separate download. Regards, Davide Del Vento On 05/12/2013 02:57 AM, Armin Rigo wrote: > Hi all, > > Here I'm describing the best I could attempt about the problem of > binary distributions. It's still a major mess to implement. It may > be done some day, but basically at this point I'm looking for > contributors. > > A first note: this is *only* about the Linux Binary distributions that > we provide on pypy.org/download.html. It has no effect on the pypy > provided (later) by your particular Linux distribution. > > The goal is to make it *work* in a reasonable way. 
The goal is *not* > to be sure that it will automatically link with the latest version of > libssl.so.x.y.z that you happen to have installed on your system under > some non-automatically-guessable name, because the goal is not magic. > It would however let you do that manually if you care to. (If you > want something 100% automatic, go to your own Linux distribution and > help them upgrade.) > > The idea would be to have in /opt/pypy-2.0.x/bin an executable "pypy", > which dynamically links to libssl.so, libcrypto.so, libffi.so, etc.; > but to also, by default, put actual libraries in the same directory as > the executable, under the versionless names "libcrypto.so" etc., and > configure the executable in such a way that it would load these > libraries (without using LD_LIBRARY_PATH; see below). This is for the > *work* part. If people downloading the linux binaries want better, > they can remove the libraries and have the pypy finds them on their > system; or if their system is missing files with the exact same name, > they can instead put in /opt/pypy-2.0.x/bin some symlinks. (This is > much better than having to stick the symlinks in /usr/lib/.) > > Well, that's the goal. It's of course not so simple. Here is what I found out: > > - First, I'm going to ignore the optional notion of "versioned symbols" here. > > - In a Linux .so file, there might be an entry "SONAME" embedded in > the file, which is supposed to contain "libfoo.so.N" with a single > number N. In the case of libcrypto/libssl it is in general not that, > of course. It is for example "libcrypto.so.1.0.0". The SONAME might > also be absent. > > - When we compile an executable with "gcc -lfoo" or "gcc > -l:/path/libfoo.so" or just "gcc /path/libfoo.so", gcc looks up the > real libfoo.so.x.y.z and reads it. It then puts the name of the > library inside the executable, for later dynamic linking, as can be > seen in the left column of "ldd executable". Now the 'name' written > there is just the filename if the .so has no SONAME. But if the .so > has an SONAME, it completely and definitely ignores the filename given > on the command-line, and puts the SONAME instead. > > - Another option to gcc is "-Wl,-rpath=X" which embeds another entry > into the produced binary: RPATH, giving it the value X. This should > be a path: the "run-time search path". It may start with the special > string "$ORIGIN", which is resolved to "wherever the binary actually > is at run-time". > > - At run-time, the entries in that table are searched first in the > RPATH if any, then in the system's default places. > > On a default "gcc -lcrypto" for example, gcc finds > /usr/lib/libcrypto.so (with or without extra .x.y.z), loads its > SONAME, finds "libcrypto.so.x.y.z", and so sticks "libcrypto.so.x.y.z" > into the executable. At run-time, the dynamic linker will thus only > look for a file "libcrypto.so.x.y.z". I found no way to tell gcc > "ignore the SONAME", so there is no way to produce an executable that > would contain a more general name. > > So the problems we're facing are: the SONAME of libssl/libcrypto are > too precise for us; and anyway the libc itself has added > incompatibilities in recent versions too. > > It's stupid but it seems that the only reasonable way forward is > really to install on all buildslaves a custom chroot. This chroot > would contain an old-enough version of the libraries like the libc. 
> It would also contain a *custom* compiled version of libssl/libcrypto > (and if needed libffi and all others) which does not include any > SONAME. In this way, we can choose with gcc options what names end up > in the binary. We can pick a general name, and package with the Linux > binaries .so's with these general names. > > So, anyone up to the task? > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From bartwiegmans at gmail.com Sun May 12 19:59:45 2013 From: bartwiegmans at gmail.com (Bart Wiegmans) Date: Sun, 12 May 2013 19:59:45 +0200 Subject: [pypy-dev] Issue 1475 (add_memory_pressure()) Message-ID: Hi everybody, First, let me introduce myself. I'm Bart Wiegmans and I would like to work for pypy via GSoC this summer. To that end I proposed adding a concurrent / incremental garbage collector to PyPy. Reasonably, Fijal requested me to bring in some code before I could be accepted, and since issue 1475 (https://bugs.pypy.org/issue1475) is rather memory-related, it seems like a good fit. However, I've failed to make any progress on it (despite advice). In fact, I'd say that at this point I simply don't understand PyPy well enough to add this feature, and I'm not sure what to do next. Anyway, this way you know that I've tried, at least. If anyone has any more advice I'll happily try again, of course. Thank you for your time. Kind regards, Bart Wiegmans From info at nicegems.biz Mon May 13 07:16:31 2013 From: info at nicegems.biz (Widiyana Samudra) Date: Mon, 13 May 2013 13:16:31 +0800 Subject: [pypy-dev] Salam Sahajetra Message-ID: <41E3B270053316C86439C385434BAF0EC68BDA32@DONLITTLE-PC> Assalamualaikum Wr.Wb... Sebelum dan sesudahnya saya ingin memperkenalkan diri saya,nama saya WIDIYANA SAMUDRA berasal dari Indonesia dan bekerja di London (United Kingdom), ingin berkenalan dengan anda di sana. Saya ingin menawarkan satu peluang bisnis yang begitu bagus,dan bagi Anda yang berminat dengan bisnis ... Inilah kesempatan Anda,kapan lagi kalau bukan sekarang bukan.??? Produk ini di namakan (MULITE CLEANSER) kegunaannya untuk mencuci barangan yang sangat berharga sekali seperti batu intan permata yg masih mentah. Dan pada waktu yang sama perusahaan di tempat saya bekerja membutuhkan MULITE CLEANSER dimana mineral tersbut tersedia dari operator yg di Indonesia. Jadi bagi anda yang berminat dengan bisnis ini, saya ingin Anda menjadi Agent untuk menjual produk tersebut ke pada perusahaan tempat saya bekerja, (Anda membeli produk tersebut dari operator yang di Indonesia terlebih dahulu dengan harga 500USD(Lima Juta Rupiah) per karton dan dijualnya kembali ke pada perusahaan di tempat saya bekerja dengan harga 1,500USD(Lima belas Juta per karton) Yang paling penting Niat dan ikhlas untuk mencari rezeki semata-mata karena Allah. Insya Allah, Allah akan membantu kita untuk mencari rezeki yang halal dan berkat,saya ingin Anda buat keputusan yang bijak, Anda bisa merubahnya .. Insya Allah ' Apa bila ada berminat atau ada pertanyaan, silahkan balas email terus ke saya berdasarkan alamat di bawah atau silahkan berikan number supaya bisa di hubungi dan saya akan menerangkannya lebih lanjut di telepon: Insya Allah saya mencoba membantu Anda. Semoga ikatan silaturahmi sesama kita diberkati Allah. Selamat berkenalan. widiyana26 at yahoo.com Yang benar... WIDIYANA SAMUDRA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From janzert at janzert.com Mon May 13 11:12:29 2013 From: janzert at janzert.com (Janzert) Date: Mon, 13 May 2013 05:12:29 -0400 Subject: [pypy-dev] Segfault with pypy-2.0, gevent, dnspython In-Reply-To: <20130512145059.6e057c7b@descartes> References: <20130512145059.6e057c7b@descartes> Message-ID: <5190AE7D.8000408@janzert.com> On 5/12/2013 8:50 AM, Emil Kroymann wrote: > Hi List, > ... > > Below is a log of the simple steps needed to reproduce the problem. > From the generated core file, it seems, the problem occurs in the > minimark gc. I also attached the core file to this mail. > > Regards, > Emil > Please don't send files, especially multi-megabyte files to mailing lists. A link with it hosted somewhere or the offer to send it to anyone that wants it would be much better. Janzert From arigo at tunes.org Mon May 13 18:21:14 2013 From: arigo at tunes.org (Armin Rigo) Date: Mon, 13 May 2013 18:21:14 +0200 Subject: [pypy-dev] Segfault with pypy-2.0, gevent, dnspython In-Reply-To: <20130512145059.6e057c7b@descartes> References: <20130512145059.6e057c7b@descartes> Message-ID: Hi Emil, On Sun, May 12, 2013 at 2:50 PM, Emil Kroymann wrote: > Below is a log of the simple steps needed to reproduce the problem. > From the generated core file, it seems, the problem occurs in the > minimark gc. I also attached the core file to this mail. Sorry, reproducing exactly the steps you describe doesn't crash for me. Can you give me some more steps to try, e.g. running a complete test suite which usually crashes early for you? A bient?t, Armin. From emil.kroymann at isaco.de Mon May 13 18:35:06 2013 From: emil.kroymann at isaco.de (Emil Kroymann) Date: Mon, 13 May 2013 18:35:06 +0200 Subject: [pypy-dev] Segfault with pypy-2.0, gevent, dnspython In-Reply-To: References: <20130512145059.6e057c7b@descartes> Message-ID: <20130513183506.579e09e7@descartes> Hi Armin, I'm sorry, this is the only thing I tried so far. I just wanted to try out gevent on pypy and only reached this point. I noticed however, that sometimes the crash does not occur immediately (i.e. the dir command in the log I sent succeeds) and after a second the crash occurs. Can't you inspect the core file, I attached? Regards, Emil Am Mon, 13 May 2013 18:21:14 +0200 schrieb Armin Rigo : > Hi Emil, > > On Sun, May 12, 2013 at 2:50 PM, Emil Kroymann > wrote: > > Below is a log of the simple steps needed to reproduce the problem. > > From the generated core file, it seems, the problem occurs in the > > minimark gc. I also attached the core file to this mail. > > Sorry, reproducing exactly the steps you describe doesn't crash for > me. Can you give me some more steps to try, e.g. running a complete > test suite which usually crashes early for you? > > > A bient?t, > > Armin. > -- Emil Kroymann VoIP Services Engineer Email: emil.kroymann at isaco.de Tel: +49-30-203899885 Mobile: +49-151-62820588 ISACO GmbH Kurf?rstenstra?e 79 10787 Berlin Germany Amtsgericht Charlottenburg, HRB 112464B Gesch?ftsf?hrer: Daniel Frommherz -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From fabio88.dorta at gmail.com Tue May 14 12:04:21 2013 From: fabio88.dorta at gmail.com (Fabio D'Orta) Date: Tue, 14 May 2013 12:04:21 +0200 Subject: [pypy-dev] Include .so modules generated with f2py from fortran script Message-ID: Hello to everyone, I'm new to python and especially to pypy. 
I have developed a code in Cpython with some modules, that were the bottle-neck, in fortran thanks to f2py. I'm translating the code as pypy could run (changing def that uses numpy not supported by numpypy) but I'm stuck in finding a way to ingest the .so module into pypy. I have tried with CFFI but with no luck. I have also considered to write the fortran modules (which basically solves linear equation and find the eigenvalues) in pypy but linalg is missing in numpypy. Could you help me? Thanks in advance for your time Fabio -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Tue May 14 17:03:49 2013 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 May 2013 17:03:49 +0200 Subject: [pypy-dev] Include .so modules generated with f2py from fortran script In-Reply-To: References: Message-ID: Hi Fabio, On Tue, May 14, 2013 at 12:04 PM, Fabio D'Orta wrote: > stuck in finding a way to ingest the .so module into pypy. I have tried with > CFFI but with no luck. This is what I would recommend you to try: CFFI. We cannot help you more without any more information about what the problem is, though. A bient?t, Armin. From fabio88.dorta at gmail.com Tue May 14 17:31:54 2013 From: fabio88.dorta at gmail.com (Fabio D'Orta) Date: Tue, 14 May 2013 17:31:54 +0200 Subject: [pypy-dev] Include .so modules generated with f2py from fortran script In-Reply-To: References: Message-ID: Hi Armin, thanks for your reply. The problem with CFFI regards the "undefined symbol: PyMem_Free" when I use ffi.dlopen. Trying to import the f2py builted up library 'libprovaf2py.so' from pypy shell, I obtain the following error: >>>>from cffi import FFI >>>>ffi = FFI() >>>>import ctypes.util >>>># Verifying if ctypes find libprovaf2py.so >>>>ctypes.util.find_library('provaf2py') 'libprovaf2py.so' >>>># Trying to open libprovaf2py.so >>>>ffi.dlopen('provaf2py') Traceback (most recent call last): File "", line 1, in File "/home/fabio/Desktop/PyPy2.0/pypy-2.0/lib_pypy/cffi/api.py", line 111, in dlopen lib, function_cache = _make_ffi_library(self, name, flags) File "/home/fabio/Desktop/PyPy2.0/pypy-2.0/lib_pypy/cffi/api.py", line 365, in _make_ffi_library backendlib = backend.load_library(path, flags) OSError: cannot load library libprovaf2py.so: /usr/lib/x86_64-linux-gnu/libprovaf2py.so: undefined symbol: PyMem_Free I'm using pypy 2.0 which includes CFFI on ubuntu 12.04 LTS. Don't know if may help, on linux shell (bash) the comand "nm libprovaf2py.so" show me that PyMem_Free is undefined. However, the same module imported directly without CFFI in CPython works. PS: the library 'libprovaf2py.so' is a simple test fortran90 subroutine which accept an integer input and print it to screen. Thanks again. A presto, Fabio 2013/5/14 Armin Rigo > Hi Fabio, > > On Tue, May 14, 2013 at 12:04 PM, Fabio D'Orta > wrote: > > stuck in finding a way to ingest the .so module into pypy. I have tried > with > > CFFI but with no luck. > > This is what I would recommend you to try: CFFI. We cannot help you > more without any more information about what the problem is, though. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arigo at tunes.org Tue May 14 17:59:52 2013 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 May 2013 17:59:52 +0200 Subject: [pypy-dev] Include .so modules generated with f2py from fortran script In-Reply-To: References: Message-ID: Hi Fabio, On Tue, May 14, 2013 at 5:31 PM, Fabio D'Orta wrote: > thanks for your reply. > The problem with CFFI regards the "undefined symbol: PyMem_Free" when I use > ffi.dlopen. Ah, you're trying to import an .so that is a CPython C extension module. That's not what CFFI is for. With CFFI you can connect directly to C (and probably Fortran) code that is not specifically written for Python (i.e. doesn't contain "#include "). You can call C code from Python with CFFI; this C code may be living in its own (Python-independent) .so file, or may just be more sources that will be compiled along during the call to ffi.verify() if you use "sources=[...]". Look up http://cffi.readthedocs.org for more information. A bient?t, Armin. From stefano at rivera.za.net Tue May 14 21:16:44 2013 From: stefano at rivera.za.net (Stefano Rivera) Date: Tue, 14 May 2013 21:16:44 +0200 Subject: [pypy-dev] SSL version? In-Reply-To: <518CC2D5.70005@tangentlabs.co.uk> References: <518CC2D5.70005@tangentlabs.co.uk> Message-ID: <20130514191644.GP20957@bach.rivera.co.za> Sorry, a bit behind on my mail... Hi Kura (2013.05.10_11:50:13_+0200) > I'd be happy to help with building for different versions of Debian or > Ubuntu. > > I myself use a version of Debian that SSL for PyPy does not work on on > almost all of my servers and tend to have to translate PyPy quite > frequently. Running a daily automated build on Wheezy would probably be useful. I've done builds of 2.0.0 for wheezy [0] if that's useful to anyone. My packaging process is really centred around packaging for the distro, so I'm patching pypy a bit [1] (mainly, PEP3147 support). Those patches get out of date quickly, which is why I haven't got around to doing daily deb builds yet (I'm scared of how much maintainance work they are going to be). So, my plan is to just build a simple deb from a pypy translation, that doesn't interact with the Debian system much at all. I prepared something like that here [2], but must just turn that into a Python script that we can include in the pypy repo. I think it'll be fairly trivial. [0] http://people.debian.org/~stefanor/pypy/wheezy/ [1] http://anonscm.debian.org/gitweb/?p=collab-maint/pypy.git;a=tree;f=debian/patches [2] http://bitbucket.org/stefanor/pypy-upstream SR -- Stefano Rivera http://tumbleweed.org.za/ H: +27 21 461 1230 C: +27 72 419 8559 From amauryfa at gmail.com Tue May 14 22:06:01 2013 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 14 May 2013 22:06:01 +0200 Subject: [pypy-dev] SSL version? In-Reply-To: <20130514191644.GP20957@bach.rivera.co.za> References: <518CC2D5.70005@tangentlabs.co.uk> <20130514191644.GP20957@bach.rivera.co.za> Message-ID: 2013/5/14 Stefano Rivera > so I'm patching pypy a bit [1] (mainly, PEP3147 support) (i.e. the __pycache__ directory) Could you compare your patch with the py3k version of pypy? I already found interesting differences in importing.py, with bugs or inefficiencies on both sides. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
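Going back to the f2py question above, a rough sketch of how a Fortran routine might be reached
through cffi once it is compiled into a plain, Python-independent shared library. The file names,
the subroutine and the gfortran name-mangling convention are all assumptions for illustration,
not code from the thread:

    # Build step, outside Python, producing a library with no CPython dependency:
    #     gfortran -shared -fPIC -o libprova.so prova.f90
    from cffi import FFI

    ffi = FFI()
    # gfortran typically exports `subroutine prova(n)` as the C symbol `prova_`
    # and passes arguments by reference, hence the pointer in the declaration.
    ffi.cdef("void prova_(int *n);")
    lib = ffi.dlopen("./libprova.so")

    n = ffi.new("int *", 42)
    lib.prova_(n)    # calls into the Fortran subroutine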
URL: From fabio88.dorta at gmail.com Wed May 15 11:51:07 2013 From: fabio88.dorta at gmail.com (Fabio D'Orta) Date: Wed, 15 May 2013 11:51:07 +0200 Subject: [pypy-dev] Include .so modules generated with f2py from fortran script In-Reply-To: References: Message-ID: Hi Armin, Thank you for the explanation. Knowing nothing about C (difficult syntax to understand) I will try the way to connect Fortran code. Have a nice day, Fabio 2013/5/14 Armin Rigo > Hi Fabio, > > On Tue, May 14, 2013 at 5:31 PM, Fabio D'Orta > wrote: > > thanks for your reply. > > The problem with CFFI regards the "undefined symbol: PyMem_Free" when I > use > > ffi.dlopen. > > Ah, you're trying to import an .so that is a CPython C extension > module. That's not what CFFI is for. With CFFI you can connect > directly to C (and probably Fortran) code that is not specifically > written for Python (i.e. doesn't contain "#include "). > > You can call C code from Python with CFFI; this C code may be living > in its own (Python-independent) .so file, or may just be more sources > that will be compiled along during the call to ffi.verify() if you use > "sources=[...]". Look up http://cffi.readthedocs.org for more > information. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Thu May 16 09:02:09 2013 From: arigo at tunes.org (Armin Rigo) Date: Thu, 16 May 2013 09:02:09 +0200 Subject: [pypy-dev] Segfault with pypy-2.0, gevent, dnspython In-Reply-To: <20130513183506.579e09e7@descartes> References: <20130512145059.6e057c7b@descartes> <20130513183506.579e09e7@descartes> Message-ID: Hi Emil, On Mon, May 13, 2013 at 6:35 PM, Emil Kroymann wrote: > I'm sorry, this is the only thing I tried so far. I just wanted to > try out gevent on pypy and only reached this point. Thanks for the report! I still have no clue why we didn't see a similar report earlier: any combination of cffi callbacks and stackless-like features explodes. So well, now it should be fixed both in "default" and in the branch "release-2.0.x" out of which we're planning to make the release 2.0.1 very soon. Is there any chance you can try it out? (If you didn't get the whole mercurial repository, you can download https://bitbucket.org/pypy/pypy/get/release-2.0.x.tar.bz2 .) 
Armin From spaans at fox-it.com Thu May 16 09:14:24 2013 From: spaans at fox-it.com (Jasper Spaans) Date: Thu, 16 May 2013 09:14:24 +0200 Subject: [pypy-dev] performance issue with context managers Message-ID: <51948750.1090608@fox-it.com> Hi list, I was toying around a bit with writing a statistical profiler in python, and came up with https://gist.github.com/jap/5584946 For reference, the main routine is: with Profiler() as p: with ProfilerContext("c1"): s = "" for t in range(100000): with ProfilerContext("c2"): s = s + "a" s = s + "b" print p.get_data() When running it on my local machine, this gives the following output: spaans at spaans-e6500:/tmp$ /usr/bin/time ~/xsrc/pypy-2.0/bin/pypy pmp.py Counter({'c1': 638}) 6.42user 0.42system 0:07.06elapsed 97%CPU (0avgtext+0avgdata 30160maxresident)k 0inputs+0outputs (0major+8383minor)pagefaults 0swaps spaans at spaans-e6500:/tmp$ /usr/bin/time python pmp.py Counter({'c1': 18, 'c2': 3}) 0.23user 0.02system 0:00.25elapsed 98%CPU (0avgtext+0avgdata 8200maxresident)k 0inputs+0outputs (0major+2226minor)pagefaults 0swaps So, two things seem to happen: the pypy version is almost 30 times slower than the python version (but hey, string appending has poor performance), and it somehow does not trigger the "c2" context.. Is the c2 context supposed to disappear? If I change the main loop to for t in range(100000): with ProfilerContext("c2"): s = s + "a" The output is still limited to spaans at spaans-e6500:/tmp$ /usr/bin/time ~/xsrc/pypy-2.0/bin/pypy pmp.py Counter({'c1': 156}) 1.61user 0.14system 0:01.78elapsed 98%CPU (0avgtext+0avgdata 30040maxresident)k which seems credible, because when I change it to for t in range(100000): s = s + "a" it's suddenly fast in pypy as well: 0.03user 0.01system 0:00.05elapsed 96%CPU (0avgtext+0avgdata 8024maxresident)k Removing all the threading.local stuff gives me the following performance data: spaans at spaans-e6500:/tmp$ /usr/bin/time python pmp.py Counter({'c1': 12, 'c2': 6}) 0.20user 0.01system 0:00.22elapsed 98%CPU (0avgtext+0avgdata 8192maxresident)k 0inputs+0outputs (0major+2224minor)pagefaults 0swaps spaans at spaans-e6500:/tmp$ /usr/bin/time ~/xsrc/pypy-2.0/bin/pypy pmp.py Counter({'c1': 621}) 6.18user 0.42system 0:06.76elapsed 97%CPU (0avgtext+0avgdata 30084maxresident)k which does not seem to differ that much. Finally, to rule out object creation issues, main was changed to: with Profiler() as p: with ProfilerContext("c1"): p2 = ProfilerContext("c2") s = "" for t in range(100000): with p2: s = s + "a" s = s + "b" print p.get_data() but that still behaves similar to the previous runs: spaans at spaans-e6500:/tmp$ /usr/bin/time ~/xsrc/pypy-2.0/bin/pypy pmp.py Counter({'c1': 624}) 6.25user 0.38system 0:06.69elapsed 99%CPU (0avgtext+0avgdata 29792maxresident)k spaans at spaans-e6500:/tmp$ /usr/bin/time python pmp.py Counter({'c2': 6, 'c1': 6}) 0.14user 0.02system 0:00.16elapsed 98%CPU (0avgtext+0avgdata 8188maxresident)k (all of this on my trusty old Dell E6500 running Ubuntu 13.04 on amd64) Could someone help me get "c2" registered, or is this expected behaviour? :) Cheers, Jasper -- /\____/\ ir. Jasper Spaans // Lead Developer DetACT \ (_)/ Fox-IT - For a more secure society! 
\ X T: +31-15-2847999 \ / \ M: +31-6-41588725 \/ KvK Haaglanden 27301624 From arigo at tunes.org Thu May 16 09:42:39 2013 From: arigo at tunes.org (Armin Rigo) Date: Thu, 16 May 2013 09:42:39 +0200 Subject: [pypy-dev] performance issue with context managers In-Reply-To: <51948750.1090608@fox-it.com> References: <51948750.1090608@fox-it.com> Message-ID: Hi Jasper, On Thu, May 16, 2013 at 9:14 AM, Jasper Spaans wrote: > I was toying around a bit with writing a statistical profiler in python, > and came up with https://gist.github.com/jap/5584946 It's a statistical profiler based on signals: whenever a signal is delivered, it checks where it is and counts. What occurs is that the signal delivery points are a bit more restricted when running JITted code. The inner loop of your example: > for t in range(100000): > with ProfilerContext("c2"): > s = s + "a" is quickly compiled to machine code that does this: guard that t < 1000000 append "c2" to the list local_context_stack.data s = s + "a" remove the last item from local_context_stack.data guard that there was no signal jump back to the top of the loop So it only checks for signals once per loop at the end, instead of (as usual when interpreting) at random points during the loop. Signals will never be delivered when "c2" is in the local_context_stack... A bient?t, Armin. From fijall at gmail.com Thu May 16 11:45:20 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 16 May 2013 11:45:20 +0200 Subject: [pypy-dev] performance issue with context managers In-Reply-To: <51948750.1090608@fox-it.com> References: <51948750.1090608@fox-it.com> Message-ID: On Thu, May 16, 2013 at 9:14 AM, Jasper Spaans wrote: > Hi list, > > I was toying around a bit with writing a statistical profiler in python, > and came up with https://gist.github.com/jap/5584946 > > For reference, the main routine is: > > with Profiler() as p: > with ProfilerContext("c1"): > s = "" > for t in range(100000): > with ProfilerContext("c2"): > s = s + "a" > s = s + "b" > print p.get_data() Also, it's not that string concatenation has poor performance, it has quadratic performance. It's the same as in cpython if you made two references to s, your performance will plummet. (each s + 'a' would do a copy). We simply don't have the refcount hack Please use l.append('a') and later ''.join() From geertj at gmail.com Thu May 16 14:29:27 2013 From: geertj at gmail.com (Geert Jansen) Date: Thu, 16 May 2013 08:29:27 -0400 Subject: [pypy-dev] Segfault with pypy-2.0, gevent, dnspython In-Reply-To: References: <20130512145059.6e057c7b@descartes> <20130513183506.579e09e7@descartes> Message-ID: Hi, On Thu, May 16, 2013 at 3:02 AM, Armin Rigo wrote: > Thanks for the report! I still have no clue why we didn't see a > similar report earlier: any combination of cffi callbacks and > stackless-like features explodes. Does the problem exist with CPython/cffi/greenlet too? I'm just about to start a module using these three. Thanks, Geert -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Thu May 16 17:21:06 2013 From: arigo at tunes.org (Armin Rigo) Date: Thu, 16 May 2013 17:21:06 +0200 Subject: [pypy-dev] Segfault with pypy-2.0, gevent, dnspython In-Reply-To: References: <20130512145059.6e057c7b@descartes> <20130513183506.579e09e7@descartes> Message-ID: Hi Emil, On Thu, May 16, 2013 at 2:29 PM, Geert Jansen wrote: > Does the problem exist with CPython/cffi/greenlet too? I'm just about to > start a module using these three. It most probably works. 
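To illustrate Maciej's append/join suggestion a couple of messages up (a generic sketch, not
code from the thread):

    # On PyPy there is no refcount trick, so each concatenation below copies the
    # whole string built so far: quadratic work overall.
    s = ""
    for i in range(100000):
        s = s + "a"

    # Linear alternative: collect the pieces and join once at the end.
    parts = []
    for i in range(100000):
        parts.append("a")
    s = "".join(parts)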
:-) A bient?t, Armin. From emil.kroymann at isaco.de Thu May 16 17:36:40 2013 From: emil.kroymann at isaco.de (Emil Kroymann) Date: Thu, 16 May 2013 17:36:40 +0200 Subject: [pypy-dev] Segfault with pypy-2.0, gevent, dnspython In-Reply-To: References: <20130512145059.6e057c7b@descartes> <20130513183506.579e09e7@descartes> Message-ID: <20130516173640.7ca3287c@descartes> Hi Armin, this simple test case works now without crashing :-) Regards, Emil Am Thu, 16 May 2013 09:02:09 +0200 schrieb Armin Rigo : > Hi Emil, > > On Mon, May 13, 2013 at 6:35 PM, Emil Kroymann > wrote: > > I'm sorry, this is the only thing I tried so far. I just wanted to > > try out gevent on pypy and only reached this point. > > Thanks for the report! I still have no clue why we didn't see a > similar report earlier: any combination of cffi callbacks and > stackless-like features explodes. So well, now it should be fixed > both in "default" and in the branch "release-2.0.x" out of which we're > planning to make the release 2.0.1 very soon. Is there any chance you > can try it out? (If you didn't get the whole mercurial repository, > you can download > https://bitbucket.org/pypy/pypy/get/release-2.0.x.tar.bz2 .) > > > > Armin > -- Emil Kroymann VoIP Services Engineer Email: emil.kroymann at isaco.de Tel: +49-30-203899885 Mobile: +49-151-62820588 ISACO GmbH Kurf?rstenstra?e 79 10787 Berlin Germany Amtsgericht Charlottenburg, HRB 112464B Gesch?ftsf?hrer: Daniel Frommherz -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From arigo at tunes.org Thu May 16 19:10:03 2013 From: arigo at tunes.org (Armin Rigo) Date: Thu, 16 May 2013 19:10:03 +0200 Subject: [pypy-dev] PyPy 2.0.1 released Message-ID: ============================== PyPy 2.0.1 - Bohr Sm?rrebr?d ============================== We're pleased to announce PyPy 2.0.1. This is a stable bugfix release over 2.0. You can download it here: http://pypy.org/download.html The fixes are mainly about fatal errors or crashes in our stdlib. See below for more details. What is PyPy? ============= PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It's fast due to its integrated tracing JIT compiler. (pypy 2.0 and cpython 2.7.3 performance comparison: http://speed.pypy.org) This release supports x86 machines running Linux 32/64, Mac OS X 64 or Windows 32. Support for ARM is progressing but not bug-free yet. Highlights ========== - fix an occasional crash in the JIT that ends in `RPython Fatal error: NotImplementedError` (https://bugs.pypy.org/issue1482). - `id(x)` is now always a positive number (except on int/float/long/complex). This fixes an issue in `_sqlite.py` (mostly for 32-bit Linux). - fix crashes of callback-from-C-functions (with cffi) when used together with Stackless features, on asmgcc (i.e. Linux only). Now gevent should work better (http://mail.python.org/pipermail/pypy-dev/2013-May/011362.html). - work around an eventlet issue with `socket._decref_socketios()` (https://bugs.pypy.org/issue1468). Cheers, arigo et. al. for the PyPy team From mrrileyx at gmail.com Sat May 18 20:03:47 2013 From: mrrileyx at gmail.com (sean riley) Date: Sat, 18 May 2013 11:03:47 -0700 Subject: [pypy-dev] pypy performance on fractal terrain generator. Message-ID: FYI. Performance comparison of pypy and CPython using a fractal terrain generator and writing out a png file. Pypy is 6-11x faster on my Ubuntu Intel system. 
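The actual map.py and terrain.py only survive below as scrubbed attachments, so purely for illustration, here is a minimal one-dimensional midpoint-displacement sketch of the kind of tight pure-Python arithmetic such a generator spends its time in (this is not Sean's code; the function and parameter names are made up):

import random

def midpoint_displacement(levels, roughness=0.5, seed=0):
    """Build a 1D fractal height profile by repeatedly subdividing
    each segment and nudging the new midpoint by a shrinking amount."""
    random.seed(seed)
    heights = [0.0, 0.0]          # the two endpoints of the profile
    spread = 1.0                  # maximum displacement at the current level
    for _ in range(levels):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2.0 + random.uniform(-spread, spread)
            refined.extend([a, mid])
        refined.append(heights[-1])
        heights = refined
        spread *= roughness       # smaller bumps at finer scales
    return heights

if __name__ == "__main__":
    profile = midpoint_displacement(12)   # 2**12 + 1 = 4097 samples
    print(len(profile), min(profile), max(profile))

The loop body is nothing but float arithmetic and list operations, which is the sort of code where the JIT typically produces speedups of the size reported below.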
For 1024x1024 map: sriley at xxxy:/data/src/cityserver$ time pypy map.py real 0m0.656s user 0m0.596s sys 0m0.052s sriley at xxx:/data/src/cityserver$ time python map.py real 0m4.189s user 0m4.132s sys 0m0.044s For 4096x4096 map: sriley at xxx:/data/src/cityserver$ time pypy map.py real 0m6.511s user 0m6.152s sys 0m0.328s sriley at xxx:/data/src/cityserver$ time python map.py real 1m7.641s user 1m7.124s sys 0m0.324s Sean. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: map.py Type: application/octet-stream Size: 459 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: terrain.py Type: application/octet-stream Size: 3811 bytes Desc: not available URL: From fijall at gmail.com Sat May 18 20:10:29 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 18 May 2013 20:10:29 +0200 Subject: [pypy-dev] pypy performance on fractal terrain generator. In-Reply-To: References: Message-ID: On Sat, May 18, 2013 at 8:03 PM, sean riley wrote: > FYI. Performance comparison of pypy and CPython using a fractal terrain > generator and writing out a png file. Pypy is 6-11x faster on my Ubuntu > Intel system. > > > For 1024x1024 map: > > sriley at xxxy:/data/src/cityserver$ time pypy map.py > real 0m0.656s > user 0m0.596s > sys 0m0.052s > > sriley at xxx:/data/src/cityserver$ time python map.py > real 0m4.189s > user 0m4.132s > sys 0m0.044s > > For 4096x4096 map: > > sriley at xxx:/data/src/cityserver$ time pypy map.py > real 0m6.511s > user 0m6.152s > sys 0m0.328s > > sriley at xxx:/data/src/cityserver$ time python map.py > real 1m7.641s > user 1m7.124s > sys 0m0.324s > > > Sean. > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Cool, thanks for sharing! Cheers, fijal PS. No, it's not a spam message despite looking like one, genuinely thanks From matti.picus at gmail.com Sat May 18 20:51:00 2013 From: matti.picus at gmail.com (Matti Picus) Date: Sat, 18 May 2013 21:51:00 +0300 Subject: [pypy-dev] Is cpyext dead forever? Message-ID: <5197CD94.5080803@gmail.com> cpyext is currently disabled on default. Is there a branch to revive it? Or are we expecting cffi to replace all interaction with external c code? I'm not sure what will be more painful, getting the world to rewrite all modules or supporting cpyext, both seem non-trivial. (I'm opening the discussion here since I'm not sure if everyone was heard in the discussion on IRC) Matti From fijall at gmail.com Sat May 18 21:45:36 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 18 May 2013 21:45:36 +0200 Subject: [pypy-dev] Is cpyext dead forever? In-Reply-To: <5197CD94.5080803@gmail.com> References: <5197CD94.5080803@gmail.com> Message-ID: On Sat, May 18, 2013 at 8:51 PM, Matti Picus wrote: > cpyext is currently disabled on default. Is there a branch to revive it? > Or are we expecting cffi to replace all interaction with external c code? > I'm not sure what will be more painful, getting the world to rewrite all > modules or supporting cpyext, both seem non-trivial. > (I'm opening the discussion here since I'm not sure if everyone was heard in > the discussion on IRC) > Matti It's disabled because I broke it. I'll unbreak it some time soon, it's just that issue is involved. I'm sorry about that, but there was no goal to deprecate it any time soon. 
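To make the cffi side of the question concrete, this is roughly what calling into an existing C library looks like with cffi's ABI mode; it is only an illustrative sketch using libc's strlen, not a port of any real module:

from cffi import FFI

ffi = FFI()
ffi.cdef("size_t strlen(const char *s);")   # declare just the C signature we need
libc = ffi.dlopen(None)                     # None loads the standard C library (Linux)

print(libc.strlen(b"hello pypy"))           # prints 10

Nothing here touches the CPython C API, which is why this style of binding works well under PyPy.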
Cheers, fijal From matti.picus at gmail.com Sat May 18 21:49:42 2013 From: matti.picus at gmail.com (Matti Picus) Date: Sat, 18 May 2013 22:49:42 +0300 Subject: [pypy-dev] Is cpyext dead forever? In-Reply-To: References: <5197CD94.5080803@gmail.com> Message-ID: <5197DB56.5070704@gmail.com> An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat May 18 21:57:13 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 18 May 2013 21:57:13 +0200 Subject: [pypy-dev] Is cpyext dead forever? In-Reply-To: <5197DB56.5070704@gmail.com> References: <5197CD94.5080803@gmail.com> <5197DB56.5070704@gmail.com> Message-ID: On Sat, May 18, 2013 at 9:49 PM, Matti Picus wrote: > > On 18/05/2013 10:45 PM, Maciej Fijalkowski wrote: > > On Sat, May 18, 2013 at 8:51 PM, Matti Picus wrote: > > cpyext is currently disabled on default. Is there a branch to revive it? > Or are we expecting cffi to replace all interaction with external c code? > I'm not sure what will be more painful, getting the world to rewrite all > modules or supporting cpyext, both seem non-trivial. > (I'm opening the discussion here since I'm not sure if everyone was heard in > the discussion on IRC) > Matti > > It's disabled because I broke it. I'll unbreak it some time soon, it's > just that issue is involved. I'm sorry about that, but there was no > goal to deprecate it any time soon. > > Cheers, > fijal > > thanks, no hurry, I just misunderstood. > Matti In a way it should not have happened at all. But also in a way I would argue cpyext is broken (annotation-wise) and my changes merely exposed brokenness. It's still my duty to fix it though ;-) Cheers, fijal From arigo at tunes.org Tue May 21 11:27:11 2013 From: arigo at tunes.org (Armin Rigo) Date: Tue, 21 May 2013 11:27:11 +0200 Subject: [pypy-dev] PyPy 2.0.2 released Message-ID: Hi all, The bugfix PyPy 2.0.2 has been released (on all platforms but OS/X which should come later in the day). ========================= PyPy 2.0.2 - Fermi Panini ========================= We're pleased to announce PyPy 2.0.2. This is a stable bugfix release over 2.0 and 2.0.1. You can download it here: http://pypy.org/download.html It fixes a crash in the JIT when calling external C functions (with ctypes/cffi) in a multithreaded context. What is PyPy? ============= PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It's fast (pypy 2.0 and cpython 2.7.3 performance comparison: http://speed.pypy.org) due to its integrated tracing JIT compiler. This release supports x86 machines running Linux 32/64, Mac OS X 64 or Windows 32. Support for ARM is progressing but not bug-free yet. Highlights ========== This release contains only the fix described above. A crash (or wrong results) used to occur if all these conditions were true: - your program is multithreaded; - it runs on a single-core machine or a heavily-loaded multi-core one; - it uses ctypes or cffi to issue external calls to C functions. This was fixed in the branch emit-call-x86 (see the example file bug1.py: https://bitbucket.org/pypy/pypy/commits/7c80121abbf4). Cheers, arigo et. al. for the PyPy team From jameslan at gmail.com Wed May 22 08:11:30 2013 From: jameslan at gmail.com (James Lan) Date: Tue, 21 May 2013 23:11:30 -0700 Subject: [pypy-dev] Killing OOType? 
(was Re: Translating pypy on FreeBSD with CLI backend) In-Reply-To: References: <1507130.Uoj2OIuUtg@dragon.dg> <1725112.nlyumnnUK3@dragon.dg> <518AC2E9.6010200@gmx.de> <518B57C5.10001@gmail.com> Message-ID: I really wish pypy is able to support jvm, cli and dalvik, so that it is possible to write core business logic in python which runs everywhere. Is there any plan to implement a better ootype as well as OO backends? On Thu, May 9, 2013 at 2:19 AM, Armin Rigo wrote: > Hi all, > > On Thu, May 9, 2013 at 10:01 AM, Antonio Cuni wrote: > > Although I have an emotional feeling with that piece of code, I think > that > > Alex is right. > > I also tend to agree. Killing stuff that nobody seriously cares about > is sad but good, particularly when it adds some otherwise-unnecessary > levels of abstractions everywhere. We should ideally wait e.g. one > month for feedback from other developers that may still have plans > there. > > And no, before someone asks, asmjs wouldn't need the OO backend but > more likely hacks on top of the LL backend. The OO-vs-LL levels of > abstractions are wrong there. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Wed May 22 08:23:03 2013 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Wed, 22 May 2013 16:23:03 +1000 Subject: [pypy-dev] Killing OOType? (was Re: Translating pypy on FreeBSD with CLI backend) In-Reply-To: References: <1507130.Uoj2OIuUtg@dragon.dg> <1725112.nlyumnnUK3@dragon.dg> <518AC2E9.6010200@gmx.de> <518B57C5.10001@gmail.com> Message-ID: On 22 May 2013 16:11, James Lan wrote: > I really wish pypy is able to support jvm, cli and dalvik, so that it is > possible to write core business logic in python which runs everywhere. > > Is there any plan to implement a better ootype as well as OO backends? Nobody is working on any of the OO backends at the moment. It might be worthwhile to document what we mean by 'from scratch', too. Graphs that haven't been rtyped actually have most of the information that oo-style or untyped backends might care about, the most significant things missing probably involve dealing with constants as well as the extregistry. -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely may reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to deny you those rights would be illegal without prior contractual agreement. From senyai at gmail.com Wed May 22 17:04:45 2013 From: senyai at gmail.com (Arseniy Terekhin) Date: Wed, 22 May 2013 19:04:45 +0400 Subject: [pypy-dev] PyPy logo update Message-ID: Hi! Let's update the logo to have `PyPy` instead of `pypy`. Too see my idea go to http://pypy.org and execute this javascript (Ctrl+Shift+J for Chrome, Ctrl+Shift+K for Firefox): $('#header').prepend('
PyPy
'); Any objection? If no then I'll make a pull request. -- Best regards, Arseniy Terekhin From arigo at tunes.org Fri May 24 01:00:49 2013 From: arigo at tunes.org (Armin Rigo) Date: Fri, 24 May 2013 01:00:49 +0200 Subject: [pypy-dev] Python 3 ... Message-ID: Hi all, Unrelated to everything, a comment about Python 3's unrivalled syntax. You can now be hesitant in your programs! Try it out: if len(x) > 0 and... and... and x[0] == 5: More seriously, I'm used to type "..." somewhere to mean "fix me first!". I didn't move to Python 3 so far, but if I had to pick a reason, this one would be high on the list. Now the program will associate a useless meaning to my "..." and try to execute it. Even worse, in the usual case (a line containing only "...") it will work: the line is completely ignored! You can write this: def foo(): x = ... ... and actually execute foo() without getting any error. Likely you'll end with a crash later because this call to foo() didn't have the expected effect, which you did not implement so far. Same with "assert...", which just passes. Great! I guess I just have to remember: use 4 dots, never 3. Armin From paulo.koch at gmail.com Fri May 24 01:21:25 2013 From: paulo.koch at gmail.com (=?UTF-8?Q?Paulo_K=C3=B6ch?=) Date: Fri, 24 May 2013 00:21:25 +0100 Subject: [pypy-dev] Python 3 ... In-Reply-To: References: Message-ID: I'll save a google search to a lot of people. http://docs.python.org/dev/library/constants.html#Ellipsis http://stackoverflow.com/questions/772124/what-does-the-python-ellipsis-object-do -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.belopolsky at gmail.com Fri May 24 04:29:51 2013 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 23 May 2013 22:29:51 -0400 Subject: [pypy-dev] Python 3 ... In-Reply-To: References: Message-ID: On Thu, May 23, 2013 at 7:00 PM, Armin Rigo wrote: > > I guess I just have to remember: use 4 dots, never 3. .. or 2 :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From msh.computing at gmail.com Fri May 24 07:36:51 2013 From: msh.computing at gmail.com (Steve Kieu) Date: Fri, 24 May 2013 15:36:51 +1000 Subject: [pypy-dev] Fail to compile psycopg2 Message-ID: Hello All, I tried to install psycopg2 but failed with the following message building 'psycopg2._psycopg' extension cc -O2 -fPIC -Wimplicit -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090109 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/home/stevek/pypy/include -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-2.7/psycopg/psycopgmodule.o -Wdeclaration-after-statement In file included from psycopg/psycopgmodule.c:38:0: ./psycopg/error.h:32:5: error: unknown type name ?PyBaseExceptionObject? error: command 'cc' failed with exit status 1 Looks like the python header from pypy does not declare PyBaseExceptionObject Any way to fix this? Thanks in adanvace, -- Steve Kieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gaynor at gmail.com Fri May 24 07:40:59 2013 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Fri, 24 May 2013 01:40:59 -0400 Subject: [pypy-dev] Fail to compile psycopg2 In-Reply-To: References: Message-ID: Hi Steve, In general you probably want to avoid C-extensions when running under PyPy. 
In this case I reccomend using psycopg2cffi instead: https://pypi.python.org/pypi/psycopg2cffi it's basically a drop-in replacement and works well under PyPy. Alex On Fri, May 24, 2013 at 1:36 AM, Steve Kieu wrote: > Hello All, > > I tried to install psycopg2 but failed with the following message > > > building 'psycopg2._psycopg' extension > > cc -O2 -fPIC -Wimplicit -DPSYCOPG_DEFAULT_PYDATETIME=1 > -DPSYCOPG_VERSION="2.5 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090109 > -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 > -I/home/stevek/pypy/include -I. -I/usr/include/postgresql > -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o > build/temp.linux-x86_64-2.7/psycopg/psycopgmodule.o > -Wdeclaration-after-statement > > In file included from psycopg/psycopgmodule.c:38:0: > > ./psycopg/error.h:32:5: error: unknown type name ?PyBaseExceptionObject? > > error: command 'cc' failed with exit status 1 > > > Looks like the python header from pypy does not declare > PyBaseExceptionObject > > Any way to fix this? > > Thanks in adanvace, > > > > -- > Steve Kieu > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... URL: From msh.computing at gmail.com Fri May 24 07:48:46 2013 From: msh.computing at gmail.com (Steve Kieu) Date: Fri, 24 May 2013 15:48:46 +1000 Subject: [pypy-dev] Fail to compile psycopg2 In-Reply-To: References: Message-ID: Thanks, It install cleanly (have not tried to test by making some connection etc.. but hopefuly all good) The MySQL-python works though cheers On Fri, May 24, 2013 at 3:40 PM, Alex Gaynor wrote: > Hi Steve, > > In general you probably want to avoid C-extensions when running under > PyPy. In this case I reccomend using psycopg2cffi instead: > https://pypi.python.org/pypi/psycopg2cffi it's basically a drop-in > replacement and works well under PyPy. > > Alex > > > On Fri, May 24, 2013 at 1:36 AM, Steve Kieu wrote: > >> Hello All, >> >> I tried to install psycopg2 but failed with the following message >> >> >> building 'psycopg2._psycopg' extension >> >> cc -O2 -fPIC -Wimplicit -DPSYCOPG_DEFAULT_PYDATETIME=1 >> -DPSYCOPG_VERSION="2.5 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090109 >> -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 >> -I/home/stevek/pypy/include -I. -I/usr/include/postgresql >> -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o >> build/temp.linux-x86_64-2.7/psycopg/psycopgmodule.o >> -Wdeclaration-after-statement >> >> In file included from psycopg/psycopgmodule.c:38:0: >> >> ./psycopg/error.h:32:5: error: unknown type name ?PyBaseExceptionObject? >> >> error: command 'cc' failed with exit status 1 >> >> >> Looks like the python header from pypy does not declare >> PyBaseExceptionObject >> >> Any way to fix this? >> >> Thanks in adanvace, >> >> >> >> -- >> Steve Kieu >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> >> > > > -- > "I disapprove of what you say, but I will defend to the death your right > to say it." 
-- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > GPG Key fingerprint: 125F 5C67 DFE9 4084 > -- Steve Kieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From chef at ghum.de Fri May 24 08:26:54 2013 From: chef at ghum.de (Massa, Harald Armin) Date: Fri, 24 May 2013 08:26:54 +0200 Subject: [pypy-dev] Python 3 ... In-Reply-To: References: Message-ID: > > I guess I just have to remember: use 4 dots, never 3. which goes great with indention, you shall use 4 spaces to indent, never 3. Harald -- GHUM GmbH Harald Armin Massa Spielberger Stra?e 49 70435 Stuttgart 0173/9409607 Amtsgericht Stuttgart, HRB 734971 -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiaxinx at gmail.com Fri May 24 08:45:20 2013 From: xiaxinx at gmail.com (Xia Xin) Date: Fri, 24 May 2013 14:45:20 +0800 Subject: [pypy-dev] Problems in connecting C++ and python with cppyy Message-ID: Hi, When I use cppyy to connect the C++ with the python, I got a failure. Following this page, https://pypy.readthedocs.org/en/improve-docs/cppyy.html Define a new class $ cat MyClass.h class MyClass { public: MyClass(int i = -99) : m_myint(i) {} int GetMyInt() { return m_myint; } void SetMyInt(int i) { m_myint = i; } public: int m_myint; }; then compile it. $ genreflex MyClass.h $ g++ -fPIC -rdynamic -O2 -shared -I$REFLEXHOME/include MyClass_rflx.cpp -o libMyClassDict.so -L$REFLEXHOME/lib -lReflex $ ls libMyClassDict.so MyClass.h MyClass_rflx.cpp BUT, when I tried to load it in pypy-c, error occurred. $ pypy-c >>>> import cppyy >>>> cppyy.load_reflection_info("libMyClassDict.so") Traceback (most recent call last): File "", line 1, in RuntimeError: libMyClassDict.so: cannot open shared object file: No such file or directory Help me please? Thanks. From arigo at tunes.org Fri May 24 10:58:41 2013 From: arigo at tunes.org (Armin Rigo) Date: Fri, 24 May 2013 10:58:41 +0200 Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, On Fri, May 24, 2013 at 8:45 AM, Xia Xin wrote: >>>>> cppyy.load_reflection_info("libMyClassDict.so") I believe that you need to say "./libMyClassDict.so". Otherwise it's searching for the .so in the system's standard places, which do not include ".". A bient?t, Armin. From wlavrijsen at lbl.gov Fri May 24 19:08:16 2013 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Fri, 24 May 2013 10:08:16 -0700 (PDT) Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, > On Fri, May 24, 2013 at 8:45 AM, Xia Xin wrote: > I believe that you need to say "./libMyClassDict.so". Otherwise it's > searching for the .so in the system's standard places, which do not > include ".". yes, or add '.' to LD_LIBRARY_PATH. The call is basically just a dlopen: internally, it uses libffi.CDLL(). Note that if the automatic class loader is used, the same rules apply, as .rootmap files available through LD_LIBRARY_PATH are used for auto-loading. I've clarified this in the documentation. Thanks, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From xiaxinx at gmail.com Sat May 25 04:02:14 2013 From: xiaxinx at gmail.com (Xia Xin) Date: Sat, 25 May 2013 10:02:14 +0800 Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, Wim I tried, but when I use the automatic class loader, the problem still exists. HERE is the error. 1. 
$ echo $LD_LIBRARY_PATH /home/GeV/work/hello/v1/src:/opt/root/lib/root $ ls /home/GeV/work/hello/v1/src libMyClassDict.rootmap libMyClassDict.so MyClass.h MyClass_rflx.cpp >>>> import cppyy >>>> myinst = cppyy.gbl.MyClass(42) Traceback (most recent call last): File "", line 1, in AttributeError: object has no attribute 'MyClass' (details: '' has no attribute 'MyClass') 2. TRY explicit load statements, Just as that Armin told me, It's OK, works perfectly $ echo $LD_LIBRARY_PATH /opt/root/lib/root >>>> import cppyy >>>> cppyy.load_reflection_info("./libMyClassDict.so") >>>> myinst = cppyy.gbl.MyClass(42) 42 3. Use LD_LIBRARY_PATH to clarify the path to libMyClassDict.so, still error $ echo $LD_LIBRARY_PATH /home/GeV/work/hello/v1/src:/opt/root/lib/root >>>> import cppyy >>>> cppyy.load_reflection_info("libMyClassDict.so") Traceback (most recent call last): File "", line 1, in RuntimeError: libMyClassDict.so: cannot open shared object file: No such file or directory I don't understand why the LD_LIBRARY_PATH did not work well on my PC. Waiting for your reply. Thanks! Best wishes, Xia Xin 2013/5/25 : > Hi, > >> On Fri, May 24, 2013 at 8:45 AM, Xia Xin wrote: >> I believe that you need to say "./libMyClassDict.so". Otherwise it's >> searching for the .so in the system's standard places, which do not >> include ".". > > > yes, or add '.' to LD_LIBRARY_PATH. The call is basically just a dlopen: > internally, it uses libffi.CDLL(). > > Note that if the automatic class loader is used, the same rules apply, as > .rootmap files available through LD_LIBRARY_PATH are used for auto-loading. > > I've clarified this in the documentation. > > Thanks, > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From wlavrijsen at lbl.gov Sat May 25 06:56:19 2013 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Fri, 24 May 2013 21:56:19 -0700 (PDT) Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, > 1. > > $ echo $LD_LIBRARY_PATH > /home/GeV/work/hello/v1/src:/opt/root/lib/root > > $ ls /home/GeV/work/hello/v1/src > libMyClassDict.rootmap libMyClassDict.so MyClass.h MyClass_rflx.cpp > >>>>> import cppyy >>>>> myinst = cppyy.gbl.MyClass(42) > Traceback (most recent call last): > File "", line 1, in > AttributeError: object has no attribute > 'MyClass' (details: '' has no attribute > 'MyClass') > > 2. TRY explicit load statements, Just as that Armin told me, It's OK, > works perfectly > > $ echo $LD_LIBRARY_PATH > /opt/root/lib/root > >>>>> import cppyy >>>>> cppyy.load_reflection_info("./libMyClassDict.so") >>>>> myinst = cppyy.gbl.MyClass(42) > 42 > > 3. Use LD_LIBRARY_PATH to clarify the path to libMyClassDict.so, still error > > $ echo $LD_LIBRARY_PATH > /home/GeV/work/hello/v1/src:/opt/root/lib/root > >>>>> import cppyy >>>>> cppyy.load_reflection_info("libMyClassDict.so") > Traceback (most recent call last): > File "", line 1, in > RuntimeError: libMyClassDict.so: cannot open shared object file: No > such file or directory > > > I don't understand why the LD_LIBRARY_PATH did not work well on my PC. neither do I, unless /home/GeV/work/hello/v1/src is not the current work directory and you have two libMyClassDict.so. What does: $ ldd /home/GeV/work/hello/v1/src/libMyClassDict.so give? Any libraries to be linked not found? 
Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From xiaxinx at gmail.com Sat May 25 07:16:07 2013 From: xiaxinx at gmail.com (Xia Xin) Date: Sat, 25 May 2013 13:16:07 +0800 Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, I am sure /home/GeV/work/hello/v1/src is the current and only work directory. $ ldd libMyClassDict.so linux-vdso.so.1 => (0x00007fff9e598000) libReflex.so.0 => /opt/root/lib/root/libReflex.so.0 (0x00007f7f2e4ec000) libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f7f2e1d0000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f7f2dfb9000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f7f2dbfa000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f7f2d9f6000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f7f2d6f9000) /lib64/ld-linux-x86-64.so.2 (0x00007f7f2e98e000) All needed libraries exist. Errors occur only when I try to use automatic class loader. Best regards, Xia Xin 2013/5/25 : > Hi, > > >> 1. >> >> $ echo $LD_LIBRARY_PATH >> /home/GeV/work/hello/v1/src:/opt/root/lib/root >> >> $ ls /home/GeV/work/hello/v1/src >> libMyClassDict.rootmap libMyClassDict.so MyClass.h >> MyClass_rflx.cpp >> >>>>>> import cppyy >>>>>> myinst = cppyy.gbl.MyClass(42) >> >> Traceback (most recent call last): >> File "", line 1, in >> AttributeError: object has no attribute >> 'MyClass' (details: '' has no attribute >> 'MyClass') >> >> 2. TRY explicit load statements, Just as that Armin told me, It's OK, >> works perfectly >> >> $ echo $LD_LIBRARY_PATH >> /opt/root/lib/root >> >>>>>> import cppyy >>>>>> cppyy.load_reflection_info("./libMyClassDict.so") >>>>>> myinst = cppyy.gbl.MyClass(42) >> >> 42 >> >> 3. Use LD_LIBRARY_PATH to clarify the path to libMyClassDict.so, still >> error >> >> $ echo $LD_LIBRARY_PATH >> /home/GeV/work/hello/v1/src:/opt/root/lib/root >> >>>>>> import cppyy >>>>>> cppyy.load_reflection_info("libMyClassDict.so") >> >> Traceback (most recent call last): >> File "", line 1, in >> RuntimeError: libMyClassDict.so: cannot open shared object file: No >> such file or directory >> >> >> I don't understand why the LD_LIBRARY_PATH did not work well on my PC. > > > neither do I, unless /home/GeV/work/hello/v1/src is not the current work > directory and you have two libMyClassDict.so. > > What does: > > $ ldd /home/GeV/work/hello/v1/src/libMyClassDict.so > > give? Any libraries to be linked not found? > > Best regards, > > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From wlavrijsen at lbl.gov Sat May 25 07:23:26 2013 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Fri, 24 May 2013 22:23:26 -0700 (PDT) Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, > All needed libraries exist. Errors occur only when I try to use > automatic class loader. puzzling ... Only thing I can think of is to use LD_DEBUG=files (or with LD_DEBUG=symbols) and see whether either gives an indication as to what is wrong (a library or directory not being considered for loading, or a symbol missing, or ... ). 
Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From xiaxinx at gmail.com Sat May 25 07:58:52 2013 From: xiaxinx at gmail.com (Xia Xin) Date: Sat, 25 May 2013 13:58:52 +0800 Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, Sorry, I did not find any mistakes except that AttributeError. So I think maybe there are some problems in my OS. Thanks for your help. if you like, I can also give you an acess to my PC(Newly installed, no secret. :D). Best regards, Xia Xin 2013/5/25 : > Hi, > > >> All needed libraries exist. Errors occur only when I try to use >> automatic class loader. > > > puzzling ... Only thing I can think of is to use LD_DEBUG=files (or with > LD_DEBUG=symbols) and see whether either gives an indication as to what is > wrong (a library or directory not being considered for loading, or a symbol > missing, or ... ). > > > Best regards, > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From wlavrijsen at lbl.gov Sat May 25 21:45:48 2013 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Sat, 25 May 2013 12:45:48 -0700 (PDT) Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, > Sorry, I did not find any mistakes except that AttributeError. So I > think maybe there are some problems in my OS. there may still be another problem lurking around that the AttributeError is hiding (and which I'd consider a bug :} ). Could you try the various cases using ctypes.CDLL(), either with CPython or PyPy? >>> import ctypes >>> ctypes.CDLL('libMyClassDict.so') If there's an issue with the error reporting, then this may show it. Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From xiaxinx at gmail.com Sun May 26 03:20:26 2013 From: xiaxinx at gmail.com (Xia Xin) Date: Sun, 26 May 2013 09:20:26 +0800 Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, This is the result. It shows that the pypy-c does not search the LD_LIBRARY_PATH env. >>>> import ctypes >>>> ctypes.CDLL('libMyClassDict.so') Traceback (most recent call last): File "", line 1, in File "/opt/pypy/lib-python/2.7/ctypes/__init__.py", line 367, in __init__ self._handle = _ffi.CDLL(name, mode) OSError: libMyClassDict.so: libMyClassDict.so: cannot open shared object file: No such file or directory >>>> ctypes.CDLL('./libMyClassDict.so') at 4895da8> maybe I make a mistake in compiling the pypy-c? I did that like this. $ hg clone https://bitbucket.org/pypy/pypy $ cd pypy $ hg up reflex-support $ pypy ../../rpython/bin/rpython --Ojit targetpypystandalone --withmod-cppyy Thank you! Best regards, Xia Xin 2013/5/26 : > Hi, > >> Sorry, I did not find any mistakes except that AttributeError. So I >> think maybe there are some problems in my OS. > > > there may still be another problem lurking around that the AttributeError > is hiding (and which I'd consider a bug :} ). > > Could you try the various cases using ctypes.CDLL(), either with CPython > or PyPy? > > >>> import ctypes > >>> ctypes.CDLL('libMyClassDict.so') > > If there's an issue with the error reporting, then this may show it. 
> > > Best regards, > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From roberto at unbit.it Sun May 26 08:50:38 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Sun, 26 May 2013 08:50:38 +0200 Subject: [pypy-dev] uWSGI 1.9.11 released with PyPy support Message-ID: <32c09e2292c3a2f9ca2be6f376be850a.squirrel@manage.unbit.it> Hi everyone, thanks to the effort of Maciej Fijalkowski (and contributions of Alex Gaynor and Armin Rigo) we now have a fully working PyPy plugin into uWSGI. The plugin works via cffi, and already supports multithreading and a good set of the uWSGI api (like caching and rpc) In the next few days i will start spamming major PaaS/ISPs using uWSGI (included my company) to ask them to offer an option to use PyPy in their services. I will probably start with pythonanywhere.com, as they make a massive use of unique uWSGI features and already offer PyPy as shell. The plugin requires nightly builds and libpypy-c.so, so (for now, i hope it will change soon ;) you need to build/translate it on your own (i have added some prebuilt version in my download area, but they are ubuntu-specific). Documentation: http://uwsgi-docs.readthedocs.org/en/latest/PyPy.html Benchmarks: http://uwsgi-docs.readthedocs.org/en/latest/PyPy_benchmarks.html Thanks a lot -- Roberto De Ioris http://unbit.it From phyo.arkarlwin at gmail.com Sun May 26 12:20:44 2013 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 26 May 2013 16:50:44 +0630 Subject: [pypy-dev] uWSGI 1.9.11 released with PyPy support In-Reply-To: <32c09e2292c3a2f9ca2be6f376be850a.squirrel@manage.unbit.it> References: <32c09e2292c3a2f9ca2be6f376be850a.squirrel@manage.unbit.it> Message-ID: wow good job roberto and pypy team. that will surely increase adoption of pypy. so zeromq working in pypy now too? On May 26, 2013 1:26 PM, "Roberto De Ioris" wrote: > > Hi everyone, thanks to the effort of Maciej Fijalkowski (and contributions > of Alex Gaynor and Armin Rigo) we now have a fully working PyPy plugin > into uWSGI. > > The plugin works via cffi, and already supports multithreading and a good > set of the uWSGI api (like caching and rpc) > > In the next few days i will start spamming major PaaS/ISPs using uWSGI > (included my company) to ask them to offer an option to use PyPy in their > services. I will probably start with pythonanywhere.com, as they make a > massive use of unique uWSGI features and already offer PyPy as shell. > > The plugin requires nightly builds and libpypy-c.so, so (for now, i hope > it will change soon ;) you need to build/translate it on your own (i have > added some prebuilt version in my download area, but they are > ubuntu-specific). > > Documentation: http://uwsgi-docs.readthedocs.org/en/latest/PyPy.html > > Benchmarks: > http://uwsgi-docs.readthedocs.org/en/latest/PyPy_benchmarks.html > > Thanks a lot > > -- > Roberto De Ioris > http://unbit.it > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sun May 26 13:43:35 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 26 May 2013 13:43:35 +0200 Subject: [pypy-dev] uWSGI 1.9.11 released with PyPy support In-Reply-To: References: <32c09e2292c3a2f9ca2be6f376be850a.squirrel@manage.unbit.it> Message-ID: On Sun, May 26, 2013 at 12:20 PM, Phyo Arkar wrote: > wow good job roberto and pypy team. 
> that will surely increase adoption of pypy. > so zeromq working in pypy now too? that has been true for at least a while now > > On May 26, 2013 1:26 PM, "Roberto De Ioris" wrote: >> >> >> Hi everyone, thanks to the effort of Maciej Fijalkowski (and contributions >> of Alex Gaynor and Armin Rigo) we now have a fully working PyPy plugin >> into uWSGI. >> >> The plugin works via cffi, and already supports multithreading and a good >> set of the uWSGI api (like caching and rpc) >> >> In the next few days i will start spamming major PaaS/ISPs using uWSGI >> (included my company) to ask them to offer an option to use PyPy in their >> services. I will probably start with pythonanywhere.com, as they make a >> massive use of unique uWSGI features and already offer PyPy as shell. >> >> The plugin requires nightly builds and libpypy-c.so, so (for now, i hope >> it will change soon ;) you need to build/translate it on your own (i have >> added some prebuilt version in my download area, but they are >> ubuntu-specific). >> >> Documentation: http://uwsgi-docs.readthedocs.org/en/latest/PyPy.html >> >> Benchmarks: >> http://uwsgi-docs.readthedocs.org/en/latest/PyPy_benchmarks.html >> >> Thanks a lot >> >> -- >> Roberto De Ioris >> http://unbit.it >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From msh.computing at gmail.com Sun May 26 13:55:48 2013 From: msh.computing at gmail.com (Steve Kieu) Date: Sun, 26 May 2013 21:55:48 +1000 Subject: [pypy-dev] uWSGI 1.9.11 released with PyPy support In-Reply-To: References: <32c09e2292c3a2f9ca2be6f376be850a.squirrel@manage.unbit.it> Message-ID: Great, compilation has been fixed !! On Sun, May 26, 2013 at 9:50 PM, Steve Kieu wrote: > > I was unable to compile it just several days ago - will try soon to see if > it got fixed > > > > > On Sun, May 26, 2013 at 9:43 PM, Maciej Fijalkowski wrote: > >> On Sun, May 26, 2013 at 12:20 PM, Phyo Arkar >> wrote: >> > wow good job roberto and pypy team. >> > that will surely increase adoption of pypy. >> > so zeromq working in pypy now too? >> >> that has been true for at least a while now >> >> > >> > On May 26, 2013 1:26 PM, "Roberto De Ioris" wrote: >> >> >> >> >> >> Hi everyone, thanks to the effort of Maciej Fijalkowski (and >> contributions >> >> of Alex Gaynor and Armin Rigo) we now have a fully working PyPy plugin >> >> into uWSGI. >> >> >> >> The plugin works via cffi, and already supports multithreading and a >> good >> >> set of the uWSGI api (like caching and rpc) >> >> >> >> In the next few days i will start spamming major PaaS/ISPs using uWSGI >> >> (included my company) to ask them to offer an option to use PyPy in >> their >> >> services. I will probably start with pythonanywhere.com, as they make >> a >> >> massive use of unique uWSGI features and already offer PyPy as shell. >> >> >> >> The plugin requires nightly builds and libpypy-c.so, so (for now, i >> hope >> >> it will change soon ;) you need to build/translate it on your own (i >> have >> >> added some prebuilt version in my download area, but they are >> >> ubuntu-specific). 
>> >> >> >> Documentation: http://uwsgi-docs.readthedocs.org/en/latest/PyPy.html >> >> >> >> Benchmarks: >> >> http://uwsgi-docs.readthedocs.org/en/latest/PyPy_benchmarks.html >> >> >> >> Thanks a lot >> >> >> >> -- >> >> Roberto De Ioris >> >> http://unbit.it >> >> _______________________________________________ >> >> pypy-dev mailing list >> >> pypy-dev at python.org >> >> http://mail.python.org/mailman/listinfo/pypy-dev >> > >> > >> > _______________________________________________ >> > pypy-dev mailing list >> > pypy-dev at python.org >> > http://mail.python.org/mailman/listinfo/pypy-dev >> > >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > > > > -- > Steve Kieu > -- Steve Kieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From msh.computing at gmail.com Sun May 26 13:50:34 2013 From: msh.computing at gmail.com (Steve Kieu) Date: Sun, 26 May 2013 21:50:34 +1000 Subject: [pypy-dev] uWSGI 1.9.11 released with PyPy support In-Reply-To: References: <32c09e2292c3a2f9ca2be6f376be850a.squirrel@manage.unbit.it> Message-ID: I was unable to compile it just several days ago - will try soon to see if it got fixed On Sun, May 26, 2013 at 9:43 PM, Maciej Fijalkowski wrote: > On Sun, May 26, 2013 at 12:20 PM, Phyo Arkar > wrote: > > wow good job roberto and pypy team. > > that will surely increase adoption of pypy. > > so zeromq working in pypy now too? > > that has been true for at least a while now > > > > > On May 26, 2013 1:26 PM, "Roberto De Ioris" wrote: > >> > >> > >> Hi everyone, thanks to the effort of Maciej Fijalkowski (and > contributions > >> of Alex Gaynor and Armin Rigo) we now have a fully working PyPy plugin > >> into uWSGI. > >> > >> The plugin works via cffi, and already supports multithreading and a > good > >> set of the uWSGI api (like caching and rpc) > >> > >> In the next few days i will start spamming major PaaS/ISPs using uWSGI > >> (included my company) to ask them to offer an option to use PyPy in > their > >> services. I will probably start with pythonanywhere.com, as they make a > >> massive use of unique uWSGI features and already offer PyPy as shell. > >> > >> The plugin requires nightly builds and libpypy-c.so, so (for now, i hope > >> it will change soon ;) you need to build/translate it on your own (i > have > >> added some prebuilt version in my download area, but they are > >> ubuntu-specific). > >> > >> Documentation: http://uwsgi-docs.readthedocs.org/en/latest/PyPy.html > >> > >> Benchmarks: > >> http://uwsgi-docs.readthedocs.org/en/latest/PyPy_benchmarks.html > >> > >> Thanks a lot > >> > >> -- > >> Roberto De Ioris > >> http://unbit.it > >> _______________________________________________ > >> pypy-dev mailing list > >> pypy-dev at python.org > >> http://mail.python.org/mailman/listinfo/pypy-dev > > > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- Steve Kieu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xiaxinx at gmail.com Mon May 27 08:20:10 2013 From: xiaxinx at gmail.com (Xia Xin) Date: Mon, 27 May 2013 14:20:10 +0800 Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, I think I find the problem. The linux distro I am using is Ubuntu. In the distro, the default library search path is not controlled by LD_LIBRARY_PATH, but /etc/ld.so.conf file! Mad. So, I can load the librarys in /usr/lib with just a name, but have to give a path to the library in my home. Apologize for disturbing you, Thank you all! Best regards, Xia Xin 2013/5/26 Xia Xin : > Hi, > > This is the result. It shows that the pypy-c does not search the > LD_LIBRARY_PATH env. > >>>>> import ctypes >>>>> ctypes.CDLL('libMyClassDict.so') > Traceback (most recent call last): > File "", line 1, in > File "/opt/pypy/lib-python/2.7/ctypes/__init__.py", line 367, in __init__ > self._handle = _ffi.CDLL(name, mode) > OSError: libMyClassDict.so: libMyClassDict.so: cannot open shared > object file: No such file or directory >>>>> ctypes.CDLL('./libMyClassDict.so') > 0x000000000493aaa0> at 4895da8> > > > maybe I make a mistake in compiling the pypy-c? I did that like this. > > $ hg clone https://bitbucket.org/pypy/pypy > $ cd pypy > $ hg up reflex-support > $ pypy ../../rpython/bin/rpython --Ojit targetpypystandalone --withmod-cppyy > > Thank you! > > > Best regards, > Xia Xin > > > 2013/5/26 : >> Hi, >> >>> Sorry, I did not find any mistakes except that AttributeError. So I >>> think maybe there are some problems in my OS. >> >> >> there may still be another problem lurking around that the AttributeError >> is hiding (and which I'd consider a bug :} ). >> >> Could you try the various cases using ctypes.CDLL(), either with CPython >> or PyPy? >> >> >>> import ctypes >> >>> ctypes.CDLL('libMyClassDict.so') >> >> If there's an issue with the error reporting, then this may show it. >> >> >> Best regards, >> Wim >> -- >> WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From arigo at tunes.org Mon May 27 09:14:02 2013 From: arigo at tunes.org (Armin Rigo) Date: Mon, 27 May 2013 09:14:02 +0200 Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi Xia, First note that cppyy is included in the standard PyPy 2.0.x releases, so you don't need to translate on the 'reflex-support' branch any more. This branch is now giving you an older version actually. On Mon, May 27, 2013 at 8:20 AM, Xia Xin wrote: > The linux distro I am using is Ubuntu. In the distro, the default > library search path is not controlled by LD_LIBRARY_PATH, but > /etc/ld.so.conf file! Mad. Are you sure? That's strange. Normally that's controlled by both. For me on Ubuntu 12.04 the LD_LIBRARY_PATH variable is correctly handled (see http://bpaste.net/show/102180/ ). A bient?t, Armin. From xiaxinx at gmail.com Mon May 27 09:45:04 2013 From: xiaxinx at gmail.com (Xia Xin) Date: Mon, 27 May 2013 15:45:04 +0800 Subject: [pypy-dev] Problems in connecting C++ and python with cppyy In-Reply-To: References: Message-ID: Hi, Armin Thanks for your reminding. My pypy is now in 2.0.0 beta2 version. I'll recompile it. I'm using Linux Mint 14, which is based on the Ubuntu 12.10. On this distro, Only /etc/ld.so.conf can be used. I tested many times. $ cat /etc/ld.so.conf include /etc/ld.so.conf.d/*.conf /home/GeV/work/hello/v1/src $ sudo ldconfig >>>> ctypes.CDLL('libMyClassDict.so') at 7fcefaeac2f8> I do not know why. 
Maybe Mint has disabled the LD_LIBRARY_PATH for some reasons or a bug... Thanks for your help. Best regards, Xia Xin 2013/5/27 Armin Rigo : > Hi Xia, > > First note that cppyy is included in the standard PyPy 2.0.x releases, > so you don't need to translate on the 'reflex-support' branch any > more. This branch is now giving you an older version actually. > > On Mon, May 27, 2013 at 8:20 AM, Xia Xin wrote: >> The linux distro I am using is Ubuntu. In the distro, the default >> library search path is not controlled by LD_LIBRARY_PATH, but >> /etc/ld.so.conf file! Mad. > > Are you sure? That's strange. Normally that's controlled by both. > For me on Ubuntu 12.04 the LD_LIBRARY_PATH variable is correctly > handled (see http://bpaste.net/show/102180/ ). > > > A bient?t, > > Armin. From ram at rachum.com Mon May 27 12:39:01 2013 From: ram at rachum.com (Ram Rachum) Date: Mon, 27 May 2013 13:39:01 +0300 Subject: [pypy-dev] Cross-post from python-ideas: Compressing the stack on the fly Message-ID: Hi guys, I made a post on the Python-ideas mailing list that I was told might be relevant to Pypy. I've reproduced the original email below. Here is the thread on Python-ideas with all the discussion. -------------- Hi everybody, Here's an idea I had a while ago. Now, I'm an ignoramus when it comes to how programming languages are implemented, so this idea will most likely be either (a) completely impossible or (b) trivial knowledge. I was thinking about the implementation of the factorial in Python. I was comparing in my mind 2 different solutions: The recursive one, and the one that uses a loop. Here are example implementations for them: def factorial_recursive(n): if n == 1: return 1 return n * factorial_recursive(n - 1) def factorial_loop(n): result = 1 for i in range(1, n + 1): result *= i return result I know that the recursive one is problematic, because it's putting a lot of items on the stack. In fact it's using the stack as if it was a loop variable. The stack wasn't meant to be used like that. Then the question came to me, why? Maybe the stack could be built to handle this kind of (ab)use? I read about tail-call optimization on Wikipedia. If I understand correctly, the gist of it is that the interpreter tries to recognize, on a frame-by-frame basis, which frames could be completely eliminated, and then it eliminates those. Then I read Guido's blog post explaining why he doesn't want it in Python. In that post he outlined 4 different reasons why TCO shouldn't be implemented in Python. But then I thought, maybe you could do something smarter than eliminating individual stack frames. Maybe we could create something that is to the current implementation of the stack what `xrange` is to the old-style `range`. A smart object that allows access to any of a long list of items in it, without actually having to store those items. This would solve the first argument that Guido raises in his post, which I found to be the most substantial one. What I'm saying is: Imagine the stack of the interpreter when it runs the factorial example above for n=1000. It has around 1000 items in it and it's just about to explode. But then, if you'd look at the contents of that stack, you'd see it's embarrassingly regular, a compression algorithm's wet dream. It's just the same code location over and over again, with a different value for `n`. So what I'm suggesting is an algorithm to compress that stack on the fly. 
An algorithm that would detect regularities in the stack and instead of saving each individual frame, save just the pattern. Then, there wouldn't be any problem with showing informative stack trace: Despite not storing every individual frame, each individual frame could still be accessed, similarly to how `xrange` allow access to each individual member without having to store each of them. Then, the stack could store a lot more items, and tasks that currently require recursion (like pickling using the standard library) will be able to handle much deeper recursions. What do you think? Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tigrine.samir at gmail.com Tue May 28 11:19:53 2013 From: tigrine.samir at gmail.com (Samir Tigrine) Date: Tue, 28 May 2013 11:19:53 +0200 Subject: [pypy-dev] pypy Message-ID: Hello now I intend to use pypy It is compatible with zope ? cordially -------------- next part -------------- An HTML attachment was scrubbed... URL: From bartwiegmans at gmail.com Tue May 28 14:12:30 2013 From: bartwiegmans at gmail.com (Bart Wiegmans) Date: Tue, 28 May 2013 14:12:30 +0200 Subject: [pypy-dev] Cross-post from python-ideas: Compressing the stack on the fly Message-ID: Hi Ram, That is a daring idea, really. But since you asked it, I will tell you what I think. I think it is a bad idea. First of all, the complexity of implementation. I see two 'obvious' implementations of your idea, which is a): an 'on-line' stack compressor, which will slow down functions calls farther than they are already (in CPython, anyway), or b): a 'just-in-time' stack compressor that is initiated when the 1000th stack frame is reached. I can imagine this happening in-place, but it won't be efficient. Now consider what happens when an exception is raised from the bottom to the top.Or worse, from the bottom to somewhere-in-the-middle. The second point concerns the possible gains. Suppose your recursive factorial stack is compressed. At the very least, any compression algorithm must store the decreasing integer that is multiplied. (You might get away with detecting the pattern, but don't count on it). At best, you might run your recursive algorithm a bit longer, but it will overflow eventually. In other words, on a machine with a finite stack size, the algorithm is *wrong*. The correct recursive implementation looks like this: def factorial(x): def recursive(x, c): if x <= 1: return c return recursive(x-1, x * c) return recursive(x, 1) Which doesn't need to store the decreasing x on any stack, and is thus a prime candidate for TCO. The third point concerns the reason why python does not have TCO in the first place. I've read Guido's blog as well, and in my opinion, he's wrong to make such a distinction between what are essentially nearly identical processes: jumping to new code locations. As it is, however, he's dictator. In python, you're simply not /supposed/ to use deep recursive algorithms. It is considered un-pythonic. Nevertheless, I like the style of your idea :-). 
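As a footnote: the accumulator version of factorial above can already be run to arbitrary depth in today's Python with a small hand-written trampoline, which is essentially the transformation TCO would apply automatically. A sketch (the helper names are made up, not from any library):

def trampoline(func, *args):
    # Keep calling as long as the result is itself a callable (a thunk).
    result = func(*args)
    while callable(result):
        result = result()
    return result

def factorial(x):
    def recursive(x, c):
        if x <= 1:
            return c
        return lambda: recursive(x - 1, x * c)   # return a thunk instead of recursing
    return trampoline(recursive, x, 1)

print(len(str(factorial(5000))))   # far past the default recursion limit, no overflow

Each nested call is traded for one iteration of the while loop, so the interpreter stack stays flat; the cost is allocating one closure per step.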
Kind regards, Bart Wiegmans 2013/5/28 : > Send pypy-dev mailing list submissions to > pypy-dev at python.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.python.org/mailman/listinfo/pypy-dev > or, via email, send a message with subject or body 'help' to > pypy-dev-request at python.org > > You can reach the person managing the list at > pypy-dev-owner at python.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of pypy-dev digest..." > > > Today's Topics: > > 1. Cross-post from python-ideas: Compressing the stack on the > fly (Ram Rachum) > 2. pypy (Samir Tigrine) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 27 May 2013 13:39:01 +0300 > From: Ram Rachum > To: pypy-dev at python.org > Subject: [pypy-dev] Cross-post from python-ideas: Compressing the > stack on the fly > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > Hi guys, > > I made a post on the Python-ideas mailing list that I was told might be > relevant to Pypy. I've reproduced the original email below. Here is the > thread on Python-ideas with all the > discussion. > > -------------- > > Hi everybody, > > Here's an idea I had a while ago. Now, I'm an ignoramus when it comes to > how programming languages are implemented, so this idea will most likely be > either (a) completely impossible or (b) trivial knowledge. > > I was thinking about the implementation of the factorial in Python. I was > comparing in my mind 2 different solutions: The recursive one, and the one > that uses a loop. Here are example implementations for them: > > def factorial_recursive(n): > if n == 1: > return 1 > return n * factorial_recursive(n - 1) > > def factorial_loop(n): > result = 1 > for i in range(1, n + 1): > result *= i > return result > > I know that the recursive one is problematic, because it's putting a lot of > items on the stack. In fact it's using the stack as if it was a loop > variable. The stack wasn't meant to be used like that. > > Then the question came to me, why? Maybe the stack could be built to handle > this kind of (ab)use? > > I read about tail-call optimization on Wikipedia. If I understand > correctly, the gist of it is that the interpreter tries to recognize, on a > frame-by-frame basis, which frames could be completely eliminated, and then > it eliminates those. Then I read Guido's blog post explaining why he > doesn't want it in Python. In that post he outlined 4 different reasons why > TCO shouldn't be implemented in Python. > > But then I thought, maybe you could do something smarter than eliminating > individual stack frames. Maybe we could create something that is to the > current implementation of the stack what `xrange` is to the old-style > `range`. A smart object that allows access to any of a long list of items > in it, without actually having to store those items. This would solve the > first argument that Guido raises in his post, which I found to be the most > substantial one. > > What I'm saying is: Imagine the stack of the interpreter when it runs the > factorial example above for n=1000. It has around 1000 items in it and it's > just about to explode. But then, if you'd look at the contents of that > stack, you'd see it's embarrassingly regular, a compression algorithm's wet > dream. It's just the same code location over and over again, with a > different value for `n`. > > So what I'm suggesting is an algorithm to compress that stack on the fly. 
> An algorithm that would detect regularities in the stack and instead of > saving each individual frame, save just the pattern. Then, there wouldn't > be any problem with showing informative stack trace: Despite not storing > every individual frame, each individual frame could still be accessed, > similarly to how `xrange` allow access to each individual member without > having to store each of them. > > Then, the stack could store a lot more items, and tasks that currently > require recursion (like pickling using the standard library) will be able > to handle much deeper recursions. > > What do you think? > > > Ram. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Tue, 28 May 2013 11:19:53 +0200 > From: Samir Tigrine > To: pypy-dev at python.org > Subject: [pypy-dev] pypy > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > Hello > > now I intend to use pypy > > It is compatible with zope ? > > cordially > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > > ------------------------------ > > End of pypy-dev Digest, Vol 25, Issue 39 > **************************************** From ram at rachum.com Tue May 28 14:24:19 2013 From: ram at rachum.com (Ram Rachum) Date: Tue, 28 May 2013 15:24:19 +0300 Subject: [pypy-dev] Cross-post from python-ideas: Compressing the stack on the fly In-Reply-To: References: Message-ID: Thanks for your critique, Bart. One counter-point I'd like to make to your third argument, which I had to make over at python-ideas as well: I am *not* advocating recursive programming. There is no need to convince me that the loop version of the algorithm is superior to the recursive version. The reason I care about optimizing recursive algorithms is because sometimes these algorithms are * forced* on you, and when they are, you want them to be as efficient as possible. There's the example of the `pickle` module. In Python, pickling is recursive. I had a case where a program of mine failed to run because it involved pickling an object that referenced a large number of small objects that referenced each other. *I had no choice except using recursion, because Python's `pickle` uses recursion.* (Except of course writing my own pickle module...) So I want recursive algorithm to be faster not because I'd like to use them, but because I want them to be faster when I'm *forced *to use them. On Tue, May 28, 2013 at 3:12 PM, Bart Wiegmans wrote: > Hi Ram, > > That is a daring idea, really. But since you asked it, I will tell you > what I think. > > I think it is a bad idea. > > First of all, the complexity of implementation. I see two 'obvious' > implementations of your idea, which is a): an 'on-line' stack > compressor, which will slow down functions calls farther than they are > already (in CPython, anyway), or b): a 'just-in-time' stack compressor > that is initiated when the 1000th stack frame is reached. I can > imagine this happening in-place, but it won't be efficient. > Now consider what happens when an exception is raised from the bottom > to the top.Or worse, from the bottom to somewhere-in-the-middle. > > The second point concerns the possible gains. Suppose your recursive > factorial stack is compressed. 
At the very least, any compression > algorithm must store the decreasing integer that is multiplied. (You > might get away with detecting the pattern, but don't count on it). At > best, you might run your recursive algorithm a bit longer, but it will > overflow eventually. In other words, on a machine with a finite stack > size, the algorithm is *wrong*. The correct recursive implementation > looks like this: > > def factorial(x): > def recursive(x, c): > if x <= 1: > return c > return recursive(x-1, x * c) > return recursive(x, 1) > > Which doesn't need to store the decreasing x on any stack, and is > thus a prime candidate for TCO. > > The third point concerns the reason why python does not have TCO in > the first place. I've read Guido's blog as well, and in my opinion, > he's wrong to make such a distinction between what are essentially > nearly identical processes: jumping to new code locations. As it is, > however, he's dictator. In python, you're simply not /supposed/ to use > deep recursive algorithms. It is considered un-pythonic. > > Nevertheless, I like the style of your idea :-). > > Kind regards, > Bart Wiegmans > > > > 2013/5/28 : > > Send pypy-dev mailing list submissions to > > pypy-dev at python.org > > > > To subscribe or unsubscribe via the World Wide Web, visit > > http://mail.python.org/mailman/listinfo/pypy-dev > > or, via email, send a message with subject or body 'help' to > > pypy-dev-request at python.org > > > > You can reach the person managing the list at > > pypy-dev-owner at python.org > > > > When replying, please edit your Subject line so it is more specific > > than "Re: Contents of pypy-dev digest..." > > > > > > Today's Topics: > > > > 1. Cross-post from python-ideas: Compressing the stack on the > > fly (Ram Rachum) > > 2. pypy (Samir Tigrine) > > > > > > ---------------------------------------------------------------------- > > > > Message: 1 > > Date: Mon, 27 May 2013 13:39:01 +0300 > > From: Ram Rachum > > To: pypy-dev at python.org > > Subject: [pypy-dev] Cross-post from python-ideas: Compressing the > > stack on the fly > > Message-ID: > > A at mail.gmail.com> > > Content-Type: text/plain; charset="iso-8859-1" > > > > Hi guys, > > > > I made a post on the Python-ideas mailing list that I was told might be > > relevant to Pypy. I've reproduced the original email below. Here is the > > thread on Python-ideas with all the > > discussion.< > https://groups.google.com/forum/?fromgroups#!topic/python-ideas/hteGSNTyC_4 > > > > > > -------------- > > > > Hi everybody, > > > > Here's an idea I had a while ago. Now, I'm an ignoramus when it comes to > > how programming languages are implemented, so this idea will most likely > be > > either (a) completely impossible or (b) trivial knowledge. > > > > I was thinking about the implementation of the factorial in Python. I was > > comparing in my mind 2 different solutions: The recursive one, and the > one > > that uses a loop. Here are example implementations for them: > > > > def factorial_recursive(n): > > if n == 1: > > return 1 > > return n * factorial_recursive(n - 1) > > > > def factorial_loop(n): > > result = 1 > > for i in range(1, n + 1): > > result *= i > > return result > > > > I know that the recursive one is problematic, because it's putting a lot > of > > items on the stack. In fact it's using the stack as if it was a loop > > variable. The stack wasn't meant to be used like that. > > > > Then the question came to me, why? Maybe the stack could be built to > handle > > this kind of (ab)use? 
> > > > I read about tail-call optimization on Wikipedia. If I understand > > correctly, the gist of it is that the interpreter tries to recognize, on > a > > frame-by-frame basis, which frames could be completely eliminated, and > then > > it eliminates those. Then I read Guido's blog post explaining why he > > doesn't want it in Python. In that post he outlined 4 different reasons > why > > TCO shouldn't be implemented in Python. > > > > But then I thought, maybe you could do something smarter than eliminating > > individual stack frames. Maybe we could create something that is to the > > current implementation of the stack what `xrange` is to the old-style > > `range`. A smart object that allows access to any of a long list of items > > in it, without actually having to store those items. This would solve the > > first argument that Guido raises in his post, which I found to be the > most > > substantial one. > > > > What I'm saying is: Imagine the stack of the interpreter when it runs the > > factorial example above for n=1000. It has around 1000 items in it and > it's > > just about to explode. But then, if you'd look at the contents of that > > stack, you'd see it's embarrassingly regular, a compression algorithm's > wet > > dream. It's just the same code location over and over again, with a > > different value for `n`. > > > > So what I'm suggesting is an algorithm to compress that stack on the fly. > > An algorithm that would detect regularities in the stack and instead of > > saving each individual frame, save just the pattern. Then, there wouldn't > > be any problem with showing informative stack trace: Despite not storing > > every individual frame, each individual frame could still be accessed, > > similarly to how `xrange` allow access to each individual member without > > having to store each of them. > > > > Then, the stack could store a lot more items, and tasks that currently > > require recursion (like pickling using the standard library) will be able > > to handle much deeper recursions. > > > > What do you think? > > > > > > Ram. > > -------------- next part -------------- > > An HTML attachment was scrubbed... > > URL: < > http://mail.python.org/pipermail/pypy-dev/attachments/20130527/5723bc1f/attachment-0001.html > > > > > > ------------------------------ > > > > Message: 2 > > Date: Tue, 28 May 2013 11:19:53 +0200 > > From: Samir Tigrine > > To: pypy-dev at python.org > > Subject: [pypy-dev] pypy > > Message-ID: > > < > CAC3411-qcKUED5YrXQ6P5O7boN46tTR+a3-htQjiDWr3wZhmFg at mail.gmail.com> > > Content-Type: text/plain; charset="iso-8859-1" > > > > Hello > > > > now I intend to use pypy > > > > It is compatible with zope ? > > > > cordially > > -------------- next part -------------- > > An HTML attachment was scrubbed... > > URL: < > http://mail.python.org/pipermail/pypy-dev/attachments/20130528/5832754f/attachment-0001.html > > > > > > ------------------------------ > > > > Subject: Digest Footer > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > > > ------------------------------ > > > > End of pypy-dev Digest, Vol 25, Issue 39 > > **************************************** > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
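On the pickle point above: a common workaround, sketched here under the assumption that simply raising the limits is acceptable, is to run the pickling in a worker thread that gets a higher recursion limit and a bigger C stack. The helper deep_pickle and its parameters are made-up names for the illustration; this only pushes the limits back, it does not remove the recursion that pickle itself performs.

import pickle
import sys
import threading

def deep_pickle(obj, path, recursion_limit=50000, stack_bytes=64 * 1024 * 1024):
    # Pickle a deeply nested object graph in a worker thread with a raised
    # recursion limit and a larger thread stack.
    outcome = {}

    def worker():
        old_limit = sys.getrecursionlimit()
        sys.setrecursionlimit(recursion_limit)
        try:
            with open(path, "wb") as f:
                pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
            outcome["ok"] = True
        finally:
            sys.setrecursionlimit(old_limit)

    threading.stack_size(stack_bytes)  # only affects threads started afterwards
    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return outcome.get("ok", False)

if __name__ == "__main__":
    # A linked structure 10000 levels deep: enough to hit the default
    # recursion limit of 1000 when pickled directly.
    node = None
    for i in range(10000):
        node = (i, node)
    print(deep_pickle(node, "deep.pkl"))

Note that sys.setrecursionlimit is process-wide, which is why the old value is restored in the finally block, and threading.stack_size has to be set before the worker thread is started.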
URL: From fijall at gmail.com Tue May 28 14:28:02 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 28 May 2013 14:28:02 +0200 Subject: [pypy-dev] Cross-post from python-ideas: Compressing the stack on the fly In-Reply-To: References: Message-ID: On Tue, May 28, 2013 at 2:24 PM, Ram Rachum wrote: > Thanks for your critique, Bart. > > One counter-point I'd like to make to your third argument, which I had to > make over at python-ideas as well: I am not advocating recursive > programming. There is no need to convince me that the loop version of the > algorithm is superior to the recursive version. The reason I care about > optimizing recursive algorithms is because sometimes these algorithms are > forced on you, and when they are, you want them to be as efficient as > possible. > > There's the example of the `pickle` module. In Python, pickling is > recursive. I had a case where a program of mine failed to run because it > involved pickling an object that referenced a large number of small objects > that referenced each other. I had no choice except using recursion, because > Python's `pickle` uses recursion. (Except of course writing my own pickle > module...) > > So I want recursive algorithm to be faster not because I'd like to use them, > but because I want them to be faster when I'm forced to use them. stackless does allow you to have deep recursion by copying the stack away FYI. > > > On Tue, May 28, 2013 at 3:12 PM, Bart Wiegmans > wrote: >> >> Hi Ram, >> >> That is a daring idea, really. But since you asked it, I will tell you >> what I think. >> >> I think it is a bad idea. >> >> First of all, the complexity of implementation. I see two 'obvious' >> implementations of your idea, which is a): an 'on-line' stack >> compressor, which will slow down functions calls farther than they are >> already (in CPython, anyway), or b): a 'just-in-time' stack compressor >> that is initiated when the 1000th stack frame is reached. I can >> imagine this happening in-place, but it won't be efficient. >> Now consider what happens when an exception is raised from the bottom >> to the top.Or worse, from the bottom to somewhere-in-the-middle. >> >> The second point concerns the possible gains. Suppose your recursive >> factorial stack is compressed. At the very least, any compression >> algorithm must store the decreasing integer that is multiplied. (You >> might get away with detecting the pattern, but don't count on it). At >> best, you might run your recursive algorithm a bit longer, but it will >> overflow eventually. In other words, on a machine with a finite stack >> size, the algorithm is *wrong*. The correct recursive implementation >> looks like this: >> >> def factorial(x): >> def recursive(x, c): >> if x <= 1: >> return c >> return recursive(x-1, x * c) >> return recursive(x, 1) >> >> Which doesn't need to store the decreasing x on any stack, and is >> thus a prime candidate for TCO. >> >> The third point concerns the reason why python does not have TCO in >> the first place. I've read Guido's blog as well, and in my opinion, >> he's wrong to make such a distinction between what are essentially >> nearly identical processes: jumping to new code locations. As it is, >> however, he's dictator. In python, you're simply not /supposed/ to use >> deep recursive algorithms. It is considered un-pythonic. >> >> Nevertheless, I like the style of your idea :-). 
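On the stackless remark above, a rough sketch of the idea, assuming the greenlet module is available (it ships with PyPy and exists as a C extension for CPython): if every 'recursive' call runs in its own greenlet, each individual stack stays shallow, because suspended callers have their stack copied away rather than piling up. The names deep_factorial and step are made up for the example.

from greenlet import greenlet

def deep_factorial(n):
    def step(k):
        if k <= 1:
            return 1
        child = greenlet(step)          # parent is the current greenlet
        return k * child.switch(k - 1)  # the child's return value comes back here
    return greenlet(step).switch(n)

if __name__ == "__main__":
    # 10000 levels would overflow a plain recursive version under the
    # default recursion limit of 1000.
    print(deep_factorial(10000).bit_length())

This trades stack depth for one suspended greenlet per level, so it sidesteps the recursion limit rather than making deep recursion free.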
>> >> Kind regards, >> Bart Wiegmans >> >> >> >> 2013/5/28 : >> > Send pypy-dev mailing list submissions to >> > pypy-dev at python.org >> > >> > To subscribe or unsubscribe via the World Wide Web, visit >> > http://mail.python.org/mailman/listinfo/pypy-dev >> > or, via email, send a message with subject or body 'help' to >> > pypy-dev-request at python.org >> > >> > You can reach the person managing the list at >> > pypy-dev-owner at python.org >> > >> > When replying, please edit your Subject line so it is more specific >> > than "Re: Contents of pypy-dev digest..." >> > >> > >> > Today's Topics: >> > >> > 1. Cross-post from python-ideas: Compressing the stack on the >> > fly (Ram Rachum) >> > 2. pypy (Samir Tigrine) >> > >> > >> > ---------------------------------------------------------------------- >> > >> > Message: 1 >> > Date: Mon, 27 May 2013 13:39:01 +0300 >> > From: Ram Rachum >> > To: pypy-dev at python.org >> > Subject: [pypy-dev] Cross-post from python-ideas: Compressing the >> > stack on the fly >> > Message-ID: >> > >> > >> > Content-Type: text/plain; charset="iso-8859-1" >> > >> > Hi guys, >> > >> > I made a post on the Python-ideas mailing list that I was told might be >> > relevant to Pypy. I've reproduced the original email below. Here is the >> > thread on Python-ideas with all the >> > >> > discussion. >> > >> > -------------- >> > >> > Hi everybody, >> > >> > Here's an idea I had a while ago. Now, I'm an ignoramus when it comes to >> > how programming languages are implemented, so this idea will most likely >> > be >> > either (a) completely impossible or (b) trivial knowledge. >> > >> > I was thinking about the implementation of the factorial in Python. I >> > was >> > comparing in my mind 2 different solutions: The recursive one, and the >> > one >> > that uses a loop. Here are example implementations for them: >> > >> > def factorial_recursive(n): >> > if n == 1: >> > return 1 >> > return n * factorial_recursive(n - 1) >> > >> > def factorial_loop(n): >> > result = 1 >> > for i in range(1, n + 1): >> > result *= i >> > return result >> > >> > I know that the recursive one is problematic, because it's putting a lot >> > of >> > items on the stack. In fact it's using the stack as if it was a loop >> > variable. The stack wasn't meant to be used like that. >> > >> > Then the question came to me, why? Maybe the stack could be built to >> > handle >> > this kind of (ab)use? >> > >> > I read about tail-call optimization on Wikipedia. If I understand >> > correctly, the gist of it is that the interpreter tries to recognize, on >> > a >> > frame-by-frame basis, which frames could be completely eliminated, and >> > then >> > it eliminates those. Then I read Guido's blog post explaining why he >> > doesn't want it in Python. In that post he outlined 4 different reasons >> > why >> > TCO shouldn't be implemented in Python. >> > >> > But then I thought, maybe you could do something smarter than >> > eliminating >> > individual stack frames. Maybe we could create something that is to the >> > current implementation of the stack what `xrange` is to the old-style >> > `range`. A smart object that allows access to any of a long list of >> > items >> > in it, without actually having to store those items. This would solve >> > the >> > first argument that Guido raises in his post, which I found to be the >> > most >> > substantial one. >> > >> > What I'm saying is: Imagine the stack of the interpreter when it runs >> > the >> > factorial example above for n=1000. 
It has around 1000 items in it and >> > it's >> > just about to explode. But then, if you'd look at the contents of that >> > stack, you'd see it's embarrassingly regular, a compression algorithm's >> > wet >> > dream. It's just the same code location over and over again, with a >> > different value for `n`. >> > >> > So what I'm suggesting is an algorithm to compress that stack on the >> > fly. >> > An algorithm that would detect regularities in the stack and instead of >> > saving each individual frame, save just the pattern. Then, there >> > wouldn't >> > be any problem with showing informative stack trace: Despite not storing >> > every individual frame, each individual frame could still be accessed, >> > similarly to how `xrange` allow access to each individual member without >> > having to store each of them. >> > >> > Then, the stack could store a lot more items, and tasks that currently >> > require recursion (like pickling using the standard library) will be >> > able >> > to handle much deeper recursions. >> > >> > What do you think? >> > >> > >> > Ram. >> > -------------- next part -------------- >> > An HTML attachment was scrubbed... >> > URL: >> > >> > >> > ------------------------------ >> > >> > Message: 2 >> > Date: Tue, 28 May 2013 11:19:53 +0200 >> > From: Samir Tigrine >> > To: pypy-dev at python.org >> > Subject: [pypy-dev] pypy >> > Message-ID: >> > >> > >> > Content-Type: text/plain; charset="iso-8859-1" >> > >> > Hello >> > >> > now I intend to use pypy >> > >> > It is compatible with zope ? >> > >> > cordially >> > -------------- next part -------------- >> > An HTML attachment was scrubbed... >> > URL: >> > >> > >> > ------------------------------ >> > >> > Subject: Digest Footer >> > >> > _______________________________________________ >> > pypy-dev mailing list >> > pypy-dev at python.org >> > http://mail.python.org/mailman/listinfo/pypy-dev >> > >> > >> > ------------------------------ >> > >> > End of pypy-dev Digest, Vol 25, Issue 39 >> > **************************************** >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From phyo.arkarlwin at gmail.com Tue May 28 14:49:31 2013 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Tue, 28 May 2013 19:19:31 +0630 Subject: [pypy-dev] Compilation failed at compile_c Message-ID: I am building pypy on sabayon Linux, Python is custom built 2.7.3, all dependencies installed. Build command : ~/workspace/runtime/bin/python ../../rpython/bin/rpython --opt=jit targetpypystandalone.py starting compile_c [platform:execute] make -j 4 in /tmp/usession-release-2.0.x-0/testing_1 [platform:Error] data_pypy_module_cpyext_pyobject.c:114:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:114:3: warning: (near initialization for ?pypy_g_array_972.a.items[0].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:239:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:239:3: warning: (near initialization for ?pypy_g_array_972.a.items[25].d_value?) 
[enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:309:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:309:3: warning: (near initialization for ?pypy_g_array_972.a.items[39].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:339:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:339:3: warning: (near initialization for ?pypy_g_array_972.a.items[45].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:399:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:399:3: warning: (near initialization for ?pypy_g_array_972.a.items[57].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:419:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:419:3: warning: (near initialization for ?pypy_g_array_972.a.items[61].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:439:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:439:3: warning: (near initialization for ?pypy_g_array_972.a.items[65].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:459:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:459:3: warning: (near initialization for ?pypy_g_array_972.a.items[69].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:509:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:509:3: warning: (near initialization for ?pypy_g_array_972.a.items[79].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:519:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:519:3: warning: (near initialization for ?pypy_g_array_972.a.items[81].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:544:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:544:3: warning: (near initialization for ?pypy_g_array_972.a.items[86].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:629:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:629:3: warning: (near initialization for ?pypy_g_array_972.a.items[103].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:709:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:709:3: warning: (near initialization for ?pypy_g_array_972.a.items[119].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:719:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:719:3: warning: (near initialization for ?pypy_g_array_972.a.items[121].d_value?) 
[enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:769:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:769:3: warning: (near initialization for ?pypy_g_array_972.a.items[131].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:774:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:774:3: warning: (near initialization for ?pypy_g_array_972.a.items[132].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:789:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:789:3: warning: (near initialization for ?pypy_g_array_972.a.items[135].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:879:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:879:3: warning: (near initialization for ?pypy_g_array_972.a.items[153].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:944:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:944:3: warning: (near initialization for ?pypy_g_array_972.a.items[166].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1029:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1029:3: warning: (near initialization for ?pypy_g_array_972.a.items[183].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1034:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1034:3: warning: (near initialization for ?pypy_g_array_972.a.items[184].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1069:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1069:3: warning: (near initialization for ?pypy_g_array_972.a.items[191].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1119:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1119:3: warning: (near initialization for ?pypy_g_array_972.a.items[201].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1129:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1129:3: warning: (near initialization for ?pypy_g_array_972.a.items[203].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1149:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1149:3: warning: (near initialization for ?pypy_g_array_972.a.items[207].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1174:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1174:3: warning: (near initialization for ?pypy_g_array_972.a.items[212].d_value?) 
[enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1179:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1179:3: warning: (near initialization for ?pypy_g_array_972.a.items[213].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1224:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1224:3: warning: (near initialization for ?pypy_g_array_972.a.items[222].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1299:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1299:3: warning: (near initialization for ?pypy_g_array_972.a.items[237].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1374:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1374:3: warning: (near initialization for ?pypy_g_array_972.a.items[252].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1389:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1389:3: warning: (near initialization for ?pypy_g_array_972.a.items[255].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1419:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1419:3: warning: (near initialization for ?pypy_g_array_972.a.items[261].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1424:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1424:3: warning: (near initialization for ?pypy_g_array_972.a.items[262].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1429:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1429:3: warning: (near initialization for ?pypy_g_array_972.a.items[263].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1444:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1444:3: warning: (near initialization for ?pypy_g_array_972.a.items[266].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1464:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1464:3: warning: (near initialization for ?pypy_g_array_972.a.items[270].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1469:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1469:3: warning: (near initialization for ?pypy_g_array_972.a.items[271].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1474:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1474:3: warning: (near initialization for ?pypy_g_array_972.a.items[272].d_value?) 
[enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1499:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1499:3: warning: (near initialization for ?pypy_g_array_972.a.items[277].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1519:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1519:3: warning: (near initialization for ?pypy_g_array_972.a.items[281].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1539:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1539:3: warning: (near initialization for ?pypy_g_array_972.a.items[285].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1569:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1569:3: warning: (near initialization for ?pypy_g_array_972.a.items[291].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1579:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1579:3: warning: (near initialization for ?pypy_g_array_972.a.items[293].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1639:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1639:3: warning: (near initialization for ?pypy_g_array_972.a.items[305].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1664:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1664:3: warning: (near initialization for ?pypy_g_array_972.a.items[310].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1699:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1699:3: warning: (near initialization for ?pypy_g_array_972.a.items[317].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1719:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1719:3: warning: (near initialization for ?pypy_g_array_972.a.items[321].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1729:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1729:3: warning: (near initialization for ?pypy_g_array_972.a.items[323].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1739:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1739:3: warning: (near initialization for ?pypy_g_array_972.a.items[325].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1819:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1819:3: warning: (near initialization for ?pypy_g_array_972.a.items[341].d_value?) 
[enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1824:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1824:3: warning: (near initialization for ?pypy_g_array_972.a.items[342].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1829:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1829:3: warning: (near initialization for ?pypy_g_array_972.a.items[343].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1879:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1879:3: warning: (near initialization for ?pypy_g_array_972.a.items[353].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1899:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1899:3: warning: (near initialization for ?pypy_g_array_972.a.items[357].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1909:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1909:3: warning: (near initialization for ?pypy_g_array_972.a.items[359].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1959:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1959:3: warning: (near initialization for ?pypy_g_array_972.a.items[369].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1969:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1969:3: warning: (near initialization for ?pypy_g_array_972.a.items[371].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1974:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1974:3: warning: (near initialization for ?pypy_g_array_972.a.items[372].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1989:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:1989:3: warning: (near initialization for ?pypy_g_array_972.a.items[375].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2014:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2014:3: warning: (near initialization for ?pypy_g_array_972.a.items[380].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2039:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2039:3: warning: (near initialization for ?pypy_g_array_972.a.items[385].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2119:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2119:3: warning: (near initialization for ?pypy_g_array_972.a.items[401].d_value?) 
[enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2139:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2139:3: warning: (near initialization for ?pypy_g_array_972.a.items[405].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2219:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2219:3: warning: (near initialization for ?pypy_g_array_972.a.items[421].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2339:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2339:3: warning: (near initialization for ?pypy_g_array_972.a.items[445].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2359:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2359:3: warning: (near initialization for ?pypy_g_array_972.a.items[449].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2379:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2379:3: warning: (near initialization for ?pypy_g_array_972.a.items[453].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2439:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2439:3: warning: (near initialization for ?pypy_g_array_972.a.items[465].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2459:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2459:3: warning: (near initialization for ?pypy_g_array_972.a.items[469].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2519:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2519:3: warning: (near initialization for ?pypy_g_array_972.a.items[481].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2564:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2564:3: warning: (near initialization for ?pypy_g_array_972.a.items[490].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2594:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2594:3: warning: (near initialization for ?pypy_g_array_972.a.items[496].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2604:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2604:3: warning: (near initialization for ?pypy_g_array_972.a.items[498].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2609:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2609:3: warning: (near initialization for ?pypy_g_array_972.a.items[499].d_value?) 
[enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2629:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2629:3: warning: (near initialization for ?pypy_g_array_972.a.items[503].d_value?) [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2639:3: warning: initialization from incompatible pointer type [enabled by default] [platform:Error] data_pypy_module_cpyext_pyobject.c:2639:3: warning: (near initialization for ?pypy_g_array_972.a.items[505].d_value?) [enabled by default] [platform:Error] pypy_module__multibytecodec_c_codecs.c:1188:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module__multibytecodec_c_codecs.c:1290:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module__ssl_interp_ssl.c: In function ?pypy_g__get_peer_alt_names?: [platform:Error] pypy_module__ssl_interp_ssl.c:11413:12: warning: pointer targets in assignment differ in signedness [-Wpointer-sign] [platform:Error] pypy_module__ssl_interp_ssl.c:11420:12: warning: assignment from incompatible pointer type [enabled by default] [platform:Error] pypy_module__ssl_interp_ssl.c:11429:12: warning: assignment from incompatible pointer type [enabled by default] [platform:Error] pypy_module__ssl_interp_ssl.c:12245:12: warning: assignment from incompatible pointer type [enabled by default] [platform:Error] pypy_module__warnings_interp_warnings.c: In function ?pypy_g_normalize_module?: [platform:Error] pypy_module__warnings_interp_warnings.c:10020:5: warning: assuming signed overflow does not occur when assuming that (X - c) > X is always false [-Wstrict-overflow] [platform:Error] pypy_module_cppyy_interp_cppyy.c: In function ?pypy_g_CPPSetItem_call?: [platform:Error] pypy_module_cppyy_interp_cppyy.c:10734:5: warning: assuming signed overflow does not occur when assuming that (X - c) > X is always false [-Wstrict-overflow] [platform:Error] pypy_module_cpyext_api.c: In function ?PyBuffer_IsContiguous?: [platform:Error] pypy_module_cpyext_api.c:35513:15: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] [platform:Error] pypy_module_cpyext_methodobject.c: In function ?pypy_g_Py_FindMethod?: [platform:Error] pypy_module_cpyext_methodobject.c:680:14: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_methodobject.c:703:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_methodobject.c:831:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_methodobject.c:987:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_methodobject.c: In function ?pypy_g_PyDescr_NewMethod?: [platform:Error] pypy_module_cpyext_methodobject.c:1499:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_methodobject.c: In function ?pypy_g_PyDescr_NewClassMethod?: [platform:Error] pypy_module_cpyext_methodobject.c:1752:13: warning: assignment discards ?const? 
qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_modsupport.c: In function ?pypy_g_convert_method_defs?: [platform:Error] pypy_module_cpyext_modsupport.c:3259:14: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_modsupport.c:3268:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_methodobject.c: In function ?pypy_g_PyCFunction_NewEx?: [platform:Error] pypy_module_cpyext_methodobject.c:1977:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_modsupport.c:4669:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_methodobject.c: In function ?pypy_g_W_PyCFunctionObject_get_doc?: [platform:Error] pypy_module_cpyext_methodobject.c:9630:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_pystate.c: In function ?pypy_g_PyThreadState_New?: [platform:Error] pypy_module_cpyext_pystate.c:566:2: warning: assignment makes pointer from integer without a cast [enabled by default] [platform:Error] pypy_module_cpyext_typeobject.c: In function ?pypy_g_W_PyCTypeObject___init__?: [platform:Error] pypy_module_cpyext_typeobject.c:3797:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_typeobject.c:4014:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_cpyext_typeobject.c:4026:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_rctime_interp_time.c: In function ?pypy_g__init_timezone?: [platform:Error] pypy_module_rctime_interp_time.c:362:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_rctime_interp_time.c:493:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [platform:Error] pypy_module_thread_os_thread.c: In function ?pypy_g_bootstrap?: [platform:Error] pypy_module_thread_os_thread.c:3166:2: warning: assignment makes pointer from integer without a cast [enabled by default] [platform:Error] rpython_memory_gctransform_asmgcroot.c: In function ?pypy_g_thread_start?: [platform:Error] rpython_memory_gctransform_asmgcroot.c:18:2: warning: assignment makes pointer from integer without a cast [enabled by default] [platform:Error] rpython_memory_gctransform_asmgcroot.c: In function ?pypy_g_locate_caller_based_on_retaddr?: [platform:Error] rpython_memory_gctransform_asmgcroot.c:662:2: warning: passing argument 4 of ?qsort? from incompatible pointer type [enabled by default] [platform:Error] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:55:0, [platform:Error] from common_header.h:38, [platform:Error] from rpython_memory_gctransform_asmgcroot.c:5: [platform:Error] /usr/include/stdlib.h:761:13: note: expected ?__compar_fn_t? but argument is of type ?int (*)(void *, void *)? 
[platform:Error] rpython_memory_gctransform_asmgcroot.c: In function ?pypy_g_belongs_to_current_thread?: [platform:Error] rpython_memory_gctransform_asmgcroot.c:1130:2: warning: assignment makes pointer from integer without a cast [enabled by default] [platform:Error] rpython_memory_gctransform_framework.c: In function ?pypy_g_setup_root_walker?: [platform:Error] rpython_memory_gctransform_framework.c:73:2: warning: assignment makes pointer from integer without a cast [enabled by default] [platform:Error] rpython_rlib__stacklet_asmgcc.c: In function ?pypy_g_StackletGcRootFinder_switch?: [platform:Error] rpython_rlib__stacklet_asmgcc.c:64:13: warning: assignment makes pointer from integer without a cast [enabled by default] [platform:Error] rpython_rlib__stacklet_asmgcc.c: In function ?pypy_g_StackletGcRootFinder_new?: [platform:Error] rpython_rlib__stacklet_asmgcc.c:172:13: warning: assignment makes pointer from integer without a cast [enabled by default] [platform:Error] rpython_rlib_rsocket.c: In function ?pypy_g_PacketAddress_get_addr?: [platform:Error] rpython_rlib_rsocket.c:9050:13: warning: pointer targets in assignment differ in signedness [-Wpointer-sign] [platform:Error] rpython_rtyper_lltypesystem_rffi.c: In function ?pypy_g__PyPy_dg_dtoa__Float_Signed_Signed_arrayPtr_arra?: [platform:Error] rpython_rtyper_lltypesystem_rffi.c:2070:2: warning: passing argument 4 of ?_PyPy_dg_dtoa? from incompatible pointer type [enabled by default] [platform:Error] In file included from common_header.h:61:0, [platform:Error] from rpython_rtyper_lltypesystem_rffi.c:5: [platform:Error] /home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/src/dtoa.h:4:8: note: expected ?Signed *? but argument is of type ?int *? [platform:Error] rpython_rtyper_lltypesystem_rffi.c:2070:2: warning: passing argument 5 of ?_PyPy_dg_dtoa? from incompatible pointer type [enabled by default] [platform:Error] In file included from common_header.h:61:0, [platform:Error] from rpython_rtyper_lltypesystem_rffi.c:5: [platform:Error] /home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/src/dtoa.h:4:8: note: expected ?Signed *? but argument is of type ?int *? [platform:Error] structseq.c: In function ?structseq_slice?: [platform:Error] structseq.c:89:9: warning: passing argument 1 of ?PyTuple_SetItem? from incompatible pointer type [enabled by default] [platform:Error] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:132:0, [platform:Error] from structseq.c:4: [platform:Error] ../pypy_decl.h:403:17: note: expected ?struct PyObject *? but argument is of type ?struct PyTupleObject *? 
[platform:Error] In file included from ../module_cache/module_4.c:165:0: [platform:Error] /home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/src/dtoa.c:131:0: warning: "PyMem_Malloc" redefined [enabled by default] [platform:Error] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:117:0, [platform:Error] from ../module_cache/module_4.c:34: [platform:Error] /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/pymem.h:8:0: note: this is the location of the previous definition [platform:Error] In file included from ../module_cache/module_4.c:165:0: [platform:Error] /home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/src/dtoa.c:132:0: warning: "PyMem_Free" redefined [enabled by default] [platform:Error] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:117:0, [platform:Error] from ../module_cache/module_4.c:34: [platform:Error] /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/pymem.h:9:0: note: this is the location of the previous definition [platform:Error] Traceback (most recent call last): [platform:Error] Traceback (most recent call last): [platform:Error] Traceback (most recent call last): [platform:Error] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/gcc/trackgcroot.py", line 2, in [platform:Error] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/gcc/trackgcroot.py", line 2, in [platform:Error] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/gcc/trackgcroot.py", line 2, in [platform:Error] Traceback (most recent call last): [platform:Error] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/gcc/trackgcroot.py", line 2, in [platform:Error] import re, sys, os, random [platform:Error] import re, sys, os, random [platform:Error] import re, sys, os, random [platform:Error] File "/usr/lib/python2.7/random.py", line 45, in [platform:Error] File "/usr/lib/python2.7/random.py", line 45, in [platform:Error] File "/usr/lib/python2.7/random.py", line 45, in [platform:Error] import re, sys, os, random [platform:Error] File "/usr/lib/python2.7/random.py", line 45, in [platform:Error] from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil [platform:Error] from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil [platform:Error] from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil [platform:Error] ImportError: from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil [platform:Error] ImportErrorImportError: : /usr/lib/python2.7/lib-dynload/math.so: undefined symbol: PyFPE_jbuf [platform:Error] /usr/lib/python2.7/lib-dynload/math.so: undefined symbol: PyFPE_jbuf/usr/lib/python2.7/lib-dynload/math.so: undefined symbol: PyFPE_jbuf [platform:Error] [platform:Error] ImportError: /usr/lib/python2.7/lib-dynload/math.so: undefined symbol: PyFPE_jbuf [platform:Error] make: *** [testing_1.gcmap] Error 1 [platform:Error] make: *** Waiting for unfinished jobs.... 
[platform:Error] make: *** [data_pypy_goal_targetpypystandalone.gcmap] Error 1 [platform:Error] make: *** [data_pypy_goal_targetpypystandalone_1.gcmap] Error 1 [platform:Error] make: *** [data_pypy_interpreter_argument.gcmap] Error 1 [865db] translation-task} [Timer] Timings: [Timer] annotate --- 774.4 s [Timer] rtype_lltype --- 1966.4 s [Timer] pyjitpl_lltype --- 1492.1 s [Timer] backendopt_lltype --- 218.0 s [Timer] stackcheckinsertion_lltype --- 153.1 s [Timer] database_c --- 312.5 s [Timer] source_c --- 594.5 s [Timer] compile_c --- 423.2 s [Timer] =========================================== [Timer] Total: --- 5934.2 s [translation:ERROR] Error: [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/goal/translate.py", line 321, in main [translation:ERROR] drv.proceed(goals) [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/driver.py", line 733, in proceed [translation:ERROR] return self._execute(goals, task_skip = self._maybe_skip()) [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/tool/taskengine.py", line 114, in _execute [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/driver.py", line 284, in _do [translation:ERROR] res = func() [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/driver.py", line 528, in task_compile_c [translation:ERROR] cbuilder.compile(**kwds) [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/genc.py", line 366, in compile [translation:ERROR] extra_opts) [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/platform/posix.py", line 194, in execute_makefile [translation:ERROR] self._handle_error(returncode, stdout, stderr, path.join('make')) [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/platform/__init__.py", line 150, in _handle_error [translation:ERROR] raise CompilationError(stdout, stderr) [translation:ERROR] CompilationError: CompilationError(err=""" [translation:ERROR] data_pypy_module_cpyext_pyobject.c:114:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:114:3: warning: (near initialization for ?pypy_g_array_972.a.items[0].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:239:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:239:3: warning: (near initialization for ?pypy_g_array_972.a.items[25].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:309:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:309:3: warning: (near initialization for ?pypy_g_array_972.a.items[39].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:339:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:339:3: warning: (near initialization for ?pypy_g_array_972.a.items[45].d_value?) 
[enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:399:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:399:3: warning: (near initialization for ?pypy_g_array_972.a.items[57].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:419:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:419:3: warning: (near initialization for ?pypy_g_array_972.a.items[61].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:439:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:439:3: warning: (near initialization for ?pypy_g_array_972.a.items[65].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:459:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:459:3: warning: (near initialization for ?pypy_g_array_972.a.items[69].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:509:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:509:3: warning: (near initialization for ?pypy_g_array_972.a.items[79].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:519:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:519:3: warning: (near initialization for ?pypy_g_array_972.a.items[81].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:544:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:544:3: warning: (near initialization for ?pypy_g_array_972.a.items[86].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:629:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:629:3: warning: (near initialization for ?pypy_g_array_972.a.items[103].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:709:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:709:3: warning: (near initialization for ?pypy_g_array_972.a.items[119].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:719:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:719:3: warning: (near initialization for ?pypy_g_array_972.a.items[121].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:769:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:769:3: warning: (near initialization for ?pypy_g_array_972.a.items[131].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:774:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:774:3: warning: (near initialization for ?pypy_g_array_972.a.items[132].d_value?) 
[enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:789:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:789:3: warning: (near initialization for ?pypy_g_array_972.a.items[135].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:879:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:879:3: warning: (near initialization for ?pypy_g_array_972.a.items[153].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:944:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:944:3: warning: (near initialization for ?pypy_g_array_972.a.items[166].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1029:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1029:3: warning: (near initialization for ?pypy_g_array_972.a.items[183].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1034:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1034:3: warning: (near initialization for ?pypy_g_array_972.a.items[184].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1069:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1069:3: warning: (near initialization for ?pypy_g_array_972.a.items[191].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1119:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1119:3: warning: (near initialization for ?pypy_g_array_972.a.items[201].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1129:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1129:3: warning: (near initialization for ?pypy_g_array_972.a.items[203].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1149:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1149:3: warning: (near initialization for ?pypy_g_array_972.a.items[207].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1174:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1174:3: warning: (near initialization for ?pypy_g_array_972.a.items[212].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1179:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1179:3: warning: (near initialization for ?pypy_g_array_972.a.items[213].d_value?) 
[enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1224:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1224:3: warning: (near initialization for ?pypy_g_array_972.a.items[222].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1299:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1299:3: warning: (near initialization for ?pypy_g_array_972.a.items[237].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1374:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1374:3: warning: (near initialization for ?pypy_g_array_972.a.items[252].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1389:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1389:3: warning: (near initialization for ?pypy_g_array_972.a.items[255].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1419:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1419:3: warning: (near initialization for ?pypy_g_array_972.a.items[261].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1424:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1424:3: warning: (near initialization for ?pypy_g_array_972.a.items[262].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1429:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1429:3: warning: (near initialization for ?pypy_g_array_972.a.items[263].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1444:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1444:3: warning: (near initialization for ?pypy_g_array_972.a.items[266].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1464:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1464:3: warning: (near initialization for ?pypy_g_array_972.a.items[270].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1469:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1469:3: warning: (near initialization for ?pypy_g_array_972.a.items[271].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1474:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1474:3: warning: (near initialization for ?pypy_g_array_972.a.items[272].d_value?) 
[enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1499:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1499:3: warning: (near initialization for ?pypy_g_array_972.a.items[277].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1519:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1519:3: warning: (near initialization for ?pypy_g_array_972.a.items[281].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1539:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1539:3: warning: (near initialization for ?pypy_g_array_972.a.items[285].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1569:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1569:3: warning: (near initialization for ?pypy_g_array_972.a.items[291].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1579:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1579:3: warning: (near initialization for ?pypy_g_array_972.a.items[293].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1639:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1639:3: warning: (near initialization for ?pypy_g_array_972.a.items[305].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1664:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1664:3: warning: (near initialization for ?pypy_g_array_972.a.items[310].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1699:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1699:3: warning: (near initialization for ?pypy_g_array_972.a.items[317].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1719:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1719:3: warning: (near initialization for ?pypy_g_array_972.a.items[321].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1729:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1729:3: warning: (near initialization for ?pypy_g_array_972.a.items[323].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1739:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1739:3: warning: (near initialization for ?pypy_g_array_972.a.items[325].d_value?) 
[enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1819:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1819:3: warning: (near initialization for ?pypy_g_array_972.a.items[341].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1824:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1824:3: warning: (near initialization for ?pypy_g_array_972.a.items[342].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1829:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1829:3: warning: (near initialization for ?pypy_g_array_972.a.items[343].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1879:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1879:3: warning: (near initialization for ?pypy_g_array_972.a.items[353].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1899:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1899:3: warning: (near initialization for ?pypy_g_array_972.a.items[357].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1909:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1909:3: warning: (near initialization for ?pypy_g_array_972.a.items[359].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1959:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1959:3: warning: (near initialization for ?pypy_g_array_972.a.items[369].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1969:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1969:3: warning: (near initialization for ?pypy_g_array_972.a.items[371].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1974:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1974:3: warning: (near initialization for ?pypy_g_array_972.a.items[372].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1989:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:1989:3: warning: (near initialization for ?pypy_g_array_972.a.items[375].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2014:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2014:3: warning: (near initialization for ?pypy_g_array_972.a.items[380].d_value?) 
[enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2039:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2039:3: warning: (near initialization for ?pypy_g_array_972.a.items[385].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2119:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2119:3: warning: (near initialization for ?pypy_g_array_972.a.items[401].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2139:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2139:3: warning: (near initialization for ?pypy_g_array_972.a.items[405].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2219:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2219:3: warning: (near initialization for ?pypy_g_array_972.a.items[421].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2339:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2339:3: warning: (near initialization for ?pypy_g_array_972.a.items[445].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2359:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2359:3: warning: (near initialization for ?pypy_g_array_972.a.items[449].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2379:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2379:3: warning: (near initialization for ?pypy_g_array_972.a.items[453].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2439:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2439:3: warning: (near initialization for ?pypy_g_array_972.a.items[465].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2459:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2459:3: warning: (near initialization for ?pypy_g_array_972.a.items[469].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2519:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2519:3: warning: (near initialization for ?pypy_g_array_972.a.items[481].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2564:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2564:3: warning: (near initialization for ?pypy_g_array_972.a.items[490].d_value?) 
[enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2594:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2594:3: warning: (near initialization for ?pypy_g_array_972.a.items[496].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2604:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2604:3: warning: (near initialization for ?pypy_g_array_972.a.items[498].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2609:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2609:3: warning: (near initialization for ?pypy_g_array_972.a.items[499].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2629:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2629:3: warning: (near initialization for ?pypy_g_array_972.a.items[503].d_value?) [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2639:3: warning: initialization from incompatible pointer type [enabled by default] [translation:ERROR] data_pypy_module_cpyext_pyobject.c:2639:3: warning: (near initialization for ?pypy_g_array_972.a.items[505].d_value?) [enabled by default] [translation:ERROR] implement_4.c: In function ?pypy_g_descr_typecheck_get_doc?: [translation:ERROR] implement_4.c:61335:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_9.c: In function ?pypy_g_ccall_fclose__arrayPtr_reload?: [translation:ERROR] implement_9.c:39034:2: warning: passing argument 1 of ?fclose? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/object.h:4:0, [translation:ERROR] from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:85, [translation:ERROR] from common_header.h:38, [translation:ERROR] from implement_9.c:5: [translation:ERROR] /usr/include/stdio.h:238:12: note: expected ?struct FILE *? but argument is of type ?char *? [translation:ERROR] implement_13.c: In function ?pypy_g_ccall_stat64__arrayPtr_statPtr_reload?: [translation:ERROR] implement_13.c:39915:2: warning: passing argument 2 of ?stat64? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:55:0, [translation:ERROR] from implement_13.c:5: [translation:ERROR] /usr/include/sys/stat.h:504:1: note: expected ?struct stat64 *? but argument is of type ?struct stat *? [translation:ERROR] implement_13.c: In function ?pypy_g_ccall_fstat64__INT_statPtr_reload?: [translation:ERROR] implement_13.c:40155:2: warning: passing argument 2 of ?fstat64? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:55:0, [translation:ERROR] from implement_13.c:5: [translation:ERROR] /usr/include/sys/stat.h:518:1: note: expected ?struct stat64 *? but argument is of type ?struct stat *? 
[translation:ERROR] implement_14.c: In function ?pypy_g_ccall_fdopen__INT_arrayPtr_reload?: [translation:ERROR] implement_14.c:58167:11: warning: assignment from incompatible pointer type [enabled by default] [translation:ERROR] implement_14.c: In function ?pypy_g_ccall_setbuf__arrayPtr_arrayPtr_reload?: [translation:ERROR] implement_14.c:58224:2: warning: passing argument 1 of ?setbuf? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/object.h:4:0, [translation:ERROR] from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:85, [translation:ERROR] from common_header.h:38, [translation:ERROR] from implement_14.c:5: [translation:ERROR] /usr/include/stdio.h:333:13: note: expected ?struct FILE * __restrict__? but argument is of type ?char *? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetUnparsedEntityDeclHandler__NonePtr__1?: [translation:ERROR] implement_23.c:44785:2: warning: passing argument 2 of ?XML_SetUnparsedEntityDeclHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:600:1: note: expected ?XML_UnparsedEntityDeclHandler? but argument is of type ?void (*)(void *, char *, char *, char *, char *, char *)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetStartElementHandler__NonePtr_funcPt_1?: [translation:ERROR] implement_23.c:44961:2: warning: passing argument 2 of ?XML_SetStartElementHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:539:1: note: expected ?XML_StartElementHandler? but argument is of type ?void (*)(void *, char *, char **)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetSkippedEntityHandler__NonePtr_funcP_1?: [translation:ERROR] implement_23.c:45086:2: warning: passing argument 2 of ?XML_SetSkippedEntityHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:637:1: note: expected ?XML_SkippedEntityHandler? but argument is of type ?void (*)(void *, char *, int)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetEndElementHandler__NonePtr_funcPtr_?: [translation:ERROR] implement_23.c:45156:2: warning: passing argument 2 of ?XML_SetEndElementHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:543:1: note: expected ?XML_EndElementHandler? but argument is of type ?void (*)(void *, char *)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetCommentHandler__NonePtr_funcPtr_rel?: [translation:ERROR] implement_23.c:45350:2: warning: passing argument 2 of ?XML_SetCommentHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:554:1: note: expected ?XML_CommentHandler? but argument is of type ?void (*)(void *, char *)? 
[translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetElementDeclHandler__NonePtr_funcPtr_1?: [translation:ERROR] implement_23.c:45420:2: warning: passing argument 2 of ?XML_SetElementDeclHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:155:1: note: expected ?XML_ElementDeclHandler? but argument is of type ?void (*)(void *, char *, struct XML_Content *)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetEndNamespaceDeclHandler__NonePtr_fu_1?: [translation:ERROR] implement_23.c:45669:2: warning: passing argument 2 of ?XML_SetEndNamespaceDeclHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:617:1: note: expected ?XML_EndNamespaceDeclHandler? but argument is of type ?void (*)(void *, char *)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetStartNamespaceDeclHandler__NonePtr__1?: [translation:ERROR] implement_23.c:45739:2: warning: passing argument 2 of ?XML_SetStartNamespaceDeclHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:613:1: note: expected ?XML_StartNamespaceDeclHandler? but argument is of type ?void (*)(void *, char *, char *)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetEntityDeclHandler__NonePtr_funcPtr_?: [translation:ERROR] implement_23.c:45810:2: warning: passing argument 2 of ?XML_SetEntityDeclHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:341:1: note: expected ?XML_EntityDeclHandler? but argument is of type ?void (*)(void *, char *, int, char *, int, char *, char *, char *, char *)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetNotationDeclHandler__NonePtr_funcPt_1?: [translation:ERROR] implement_23.c:45881:2: warning: passing argument 2 of ?XML_SetNotationDeclHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:604:1: note: expected ?XML_NotationDeclHandler? but argument is of type ?void (*)(void *, char *, char *, char *, char *)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetStartDoctypeDeclHandler__NonePtr_fu_1?: [translation:ERROR] implement_23.c:45952:2: warning: passing argument 2 of ?XML_SetStartDoctypeDeclHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:592:1: note: expected ?XML_StartDoctypeDeclHandler? but argument is of type ?void (*)(void *, char *, char *, char *, int)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetDefaultHandlerExpand__NonePtr_funcP_1?: [translation:ERROR] implement_23.c:46022:2: warning: passing argument 2 of ?XML_SetDefaultHandlerExpand? 
from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:583:1: note: expected ?XML_DefaultHandler? but argument is of type ?void (*)(void *, char *, int)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetDefaultHandler__NonePtr_funcPtr_rel?: [translation:ERROR] implement_23.c:46092:2: warning: passing argument 2 of ?XML_SetDefaultHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:575:1: note: expected ?XML_DefaultHandler? but argument is of type ?void (*)(void *, char *, int)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetExternalEntityRefHandler__NonePtr_f_1?: [translation:ERROR] implement_23.c:46163:2: warning: passing argument 2 of ?XML_SetExternalEntityRefHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:625:1: note: expected ?XML_ExternalEntityRefHandler? but argument is of type ?int (*)(struct XML_ParserStruct *, char *, char *, char *, char *)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetProcessingInstructionHandler__NoneP_1?: [translation:ERROR] implement_23.c:46234:2: warning: passing argument 2 of ?XML_SetProcessingInstructionHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:551:1: note: expected ?XML_ProcessingInstructionHandler? but argument is of type ?void (*)(void *, char *, char *)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetXmlDeclHandler__NonePtr_funcPtr_rel?: [translation:ERROR] implement_23.c:46360:2: warning: passing argument 2 of ?XML_SetXmlDeclHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:192:1: note: expected ?XML_XmlDeclHandler? but argument is of type ?void (*)(void *, char *, char *, int)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetAttlistDeclHandler__NonePtr_funcPtr_1?: [translation:ERROR] implement_23.c:46431:2: warning: passing argument 2 of ?XML_SetAttlistDeclHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:175:1: note: expected ?XML_AttlistDeclHandler? but argument is of type ?void (*)(void *, char *, char *, char *, char *, int)? [translation:ERROR] implement_23.c: In function ?pypy_g_ccall_XML_SetCharacterDataHandler__NonePtr_funcP_1?: [translation:ERROR] implement_23.c:46571:2: warning: passing argument 2 of ?XML_SetCharacterDataHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_23.c:5: [translation:ERROR] /usr/include/expat.h:547:1: note: expected ?XML_CharacterDataHandler? but argument is of type ?void (*)(void *, char *, int)? 
[translation:ERROR] implement_24.c: In function ?pypy_g_ccall_EVP_get_digestbyname__arrayPtr_reload?: [translation:ERROR] implement_24.c:61838:11: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_25.c: In function ?pypy_g_ccall_XML_ErrorString__INT_reload?: [translation:ERROR] implement_25.c:1999:11: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_25.c: In function ?pypy_g_ccall_XML_SetUnknownEncodingHandler__NonePtr_fun_1?: [translation:ERROR] implement_25.c:4110:2: warning: passing argument 2 of ?XML_SetUnknownEncodingHandler? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:80:0, [translation:ERROR] from implement_25.c:5: [translation:ERROR] /usr/include/expat.h:641:1: note: expected ?XML_UnknownEncodingHandler? but argument is of type ?int (*)(void *, char *, struct XML_Encoding *)? [translation:ERROR] implement_25.c: In function ?pypy_g_ccall_lstat64__arrayPtr_statPtr_reload?: [translation:ERROR] implement_25.c:5315:2: warning: passing argument 2 of ?lstat64? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:55:0, [translation:ERROR] from implement_25.c:5: [translation:ERROR] /usr/include/sys/stat.h:511:1: note: expected ?struct stat64 *? but argument is of type ?struct stat *? [translation:ERROR] implement_25.c: In function ?pypy_g_ccall_forkpty__arrayPtr_arrayPtr_arrayPtr_arrayP_1?: [translation:ERROR] implement_25.c:44950:2: warning: implicit declaration of function ?forkpty? [-Wimplicit-function-declaration] [translation:ERROR] implement_25.c: In function ?pypy_g_ccall_openpty__arrayPtr_arrayPtr_arrayPtr_arrayP_1?: [translation:ERROR] implement_25.c:45249:2: warning: implicit declaration of function ?openpty? [-Wimplicit-function-declaration] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_inet_ntop__INT_arrayPtr_arrayPtr_UINT_relo?: [translation:ERROR] implement_26.c:2518:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_EVP_DigestFinal__EVP_MD_CTXPtr_arrayPtr_ar_1?: [translation:ERROR] implement_26.c:6052:2: warning: pointer targets in passing argument 2 of ?EVP_DigestFinal? differ in signedness [-Wpointer-sign] [translation:ERROR] In file included from /usr/include/openssl/x509.h:73:0, [translation:ERROR] from /usr/include/openssl/ssl.h:156, [translation:ERROR] from common_header.h:70, [translation:ERROR] from implement_26.c:5: [translation:ERROR] /usr/include/openssl/evp.h:566:5: note: expected ?unsigned char *? but argument is of type ?char *? [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_SSL_get_current_cipher__SSLPtr_reload?: [translation:ERROR] implement_26.c:16549:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_SSL_CIPHER_get_name__SSL_CIPHERPtr_reload?: [translation:ERROR] implement_26.c:16604:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_TLSv1_method____reload?: [translation:ERROR] implement_26.c:18482:12: warning: assignment discards ?const? 
qualifier from pointer target type [enabled by default] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_SSLv3_method____reload?: [translation:ERROR] implement_26.c:19382:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_SSLv2_method____reload?: [translation:ERROR] implement_26.c:19435:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_SSLv23_method____reload?: [translation:ERROR] implement_26.c:19488:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_i2d_X509__X509Ptr_arrayPtr_reload?: [translation:ERROR] implement_26.c:21459:2: warning: passing argument 2 of ?i2d_X509? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from /usr/include/openssl/ssl.h:156:0, [translation:ERROR] from common_header.h:70, [translation:ERROR] from implement_26.c:5: [translation:ERROR] /usr/include/openssl/x509.h:839:1: note: expected ?unsigned char **? but argument is of type ?char **? [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_X509V3_EXT_get__arrayPtr_reload?: [translation:ERROR] implement_26.c:24247:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_ASN1_ITEM_ptr__funcPtr_reload?: [translation:ERROR] implement_26.c:24527:12: warning: assignment from incompatible pointer type [enabled by default] [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_ASN1_item_d2i__arrayPtr_arrayPtr_Signed_AS_1?: [translation:ERROR] implement_26.c:24589:2: warning: passing argument 2 of ?ASN1_item_d2i? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from /usr/include/openssl/objects.h:960:0, [translation:ERROR] from /usr/include/openssl/evp.h:94, [translation:ERROR] from /usr/include/openssl/x509.h:73, [translation:ERROR] from /usr/include/openssl/ssl.h:156, [translation:ERROR] from common_header.h:70, [translation:ERROR] from implement_26.c:5: [translation:ERROR] /usr/include/openssl/asn1.h:1088:14: note: expected ?const unsigned char **? but argument is of type ?char **? [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_ASN1_STRING_to_UTF8__arrayPtr_asn1_string__1?: [translation:ERROR] implement_26.c:24721:2: warning: passing argument 1 of ?ASN1_STRING_to_UTF8? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from /usr/include/openssl/objects.h:960:0, [translation:ERROR] from /usr/include/openssl/evp.h:94, [translation:ERROR] from /usr/include/openssl/x509.h:73, [translation:ERROR] from /usr/include/openssl/ssl.h:156, [translation:ERROR] from common_header.h:70, [translation:ERROR] from implement_26.c:5: [translation:ERROR] /usr/include/openssl/asn1.h:1000:5: note: expected ?unsigned char **? but argument is of type ?char **? [translation:ERROR] implement_26.c: In function ?pypy_g_ccall_gai_strerror__INT_reload?: [translation:ERROR] implement_26.c:25693:12: warning: assignment discards ?const? 
qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module__multibytecodec_c_codecs.c: In function ?pypy_g_multibytecodec_encerror?: [translation:ERROR] pypy_module__multibytecodec_c_codecs.c:1188:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module__multibytecodec_c_codecs.c:1290:12: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module__ssl_interp_ssl.c: In function ?pypy_g__get_peer_alt_names?: [translation:ERROR] pypy_module__ssl_interp_ssl.c:11413:12: warning: pointer targets in assignment differ in signedness [-Wpointer-sign] [translation:ERROR] pypy_module__ssl_interp_ssl.c:11420:12: warning: assignment from incompatible pointer type [enabled by default] [translation:ERROR] pypy_module__ssl_interp_ssl.c:11429:12: warning: assignment from incompatible pointer type [enabled by default] [translation:ERROR] pypy_module__ssl_interp_ssl.c:12245:12: warning: assignment from incompatible pointer type [enabled by default] [translation:ERROR] pypy_module__warnings_interp_warnings.c: In function ?pypy_g_normalize_module?: [translation:ERROR] pypy_module__warnings_interp_warnings.c:10020:5: warning: assuming signed overflow does not occur when assuming that (X - c) > X is always false [-Wstrict-overflow] [translation:ERROR] pypy_module_cppyy_interp_cppyy.c: In function ?pypy_g_CPPSetItem_call?: [translation:ERROR] pypy_module_cppyy_interp_cppyy.c:10734:5: warning: assuming signed overflow does not occur when assuming that (X - c) > X is always false [-Wstrict-overflow] [translation:ERROR] pypy_module_cpyext_api.c: In function ?PyBuffer_IsContiguous?: [translation:ERROR] pypy_module_cpyext_api.c:35513:15: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] [translation:ERROR] pypy_module_cpyext_methodobject.c: In function ?pypy_g_Py_FindMethod?: [translation:ERROR] pypy_module_cpyext_methodobject.c:680:14: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_methodobject.c:703:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_methodobject.c:831:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_methodobject.c:987:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_methodobject.c: In function ?pypy_g_PyDescr_NewMethod?: [translation:ERROR] pypy_module_cpyext_methodobject.c:1499:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_methodobject.c: In function ?pypy_g_PyDescr_NewClassMethod?: [translation:ERROR] pypy_module_cpyext_methodobject.c:1752:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_modsupport.c: In function ?pypy_g_convert_method_defs?: [translation:ERROR] pypy_module_cpyext_modsupport.c:3259:14: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_modsupport.c:3268:13: warning: assignment discards ?const? 
qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_methodobject.c: In function ?pypy_g_PyCFunction_NewEx?: [translation:ERROR] pypy_module_cpyext_methodobject.c:1977:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_modsupport.c:4669:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_methodobject.c: In function ?pypy_g_W_PyCFunctionObject_get_doc?: [translation:ERROR] pypy_module_cpyext_methodobject.c:9630:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_pystate.c: In function ?pypy_g_PyThreadState_New?: [translation:ERROR] pypy_module_cpyext_pystate.c:566:2: warning: assignment makes pointer from integer without a cast [enabled by default] [translation:ERROR] pypy_module_cpyext_typeobject.c: In function ?pypy_g_W_PyCTypeObject___init__?: [translation:ERROR] pypy_module_cpyext_typeobject.c:3797:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_typeobject.c:4014:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_cpyext_typeobject.c:4026:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_rctime_interp_time.c: In function ?pypy_g__init_timezone?: [translation:ERROR] pypy_module_rctime_interp_time.c:362:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_rctime_interp_time.c:493:13: warning: assignment discards ?const? qualifier from pointer target type [enabled by default] [translation:ERROR] pypy_module_thread_os_thread.c: In function ?pypy_g_bootstrap?: [translation:ERROR] pypy_module_thread_os_thread.c:3166:2: warning: assignment makes pointer from integer without a cast [enabled by default] [translation:ERROR] rpython_memory_gctransform_asmgcroot.c: In function ?pypy_g_thread_start?: [translation:ERROR] rpython_memory_gctransform_asmgcroot.c:18:2: warning: assignment makes pointer from integer without a cast [enabled by default] [translation:ERROR] rpython_memory_gctransform_asmgcroot.c: In function ?pypy_g_locate_caller_based_on_retaddr?: [translation:ERROR] rpython_memory_gctransform_asmgcroot.c:662:2: warning: passing argument 4 of ?qsort? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:55:0, [translation:ERROR] from common_header.h:38, [translation:ERROR] from rpython_memory_gctransform_asmgcroot.c:5: [translation:ERROR] /usr/include/stdlib.h:761:13: note: expected ?__compar_fn_t? but argument is of type ?int (*)(void *, void *)? 
[translation:ERROR] rpython_memory_gctransform_asmgcroot.c: In function ?pypy_g_belongs_to_current_thread?: [translation:ERROR] rpython_memory_gctransform_asmgcroot.c:1130:2: warning: assignment makes pointer from integer without a cast [enabled by default] [translation:ERROR] rpython_memory_gctransform_framework.c: In function ?pypy_g_setup_root_walker?: [translation:ERROR] rpython_memory_gctransform_framework.c:73:2: warning: assignment makes pointer from integer without a cast [enabled by default] [translation:ERROR] rpython_rlib__stacklet_asmgcc.c: In function ?pypy_g_StackletGcRootFinder_switch?: [translation:ERROR] rpython_rlib__stacklet_asmgcc.c:64:13: warning: assignment makes pointer from integer without a cast [enabled by default] [translation:ERROR] rpython_rlib__stacklet_asmgcc.c: In function ?pypy_g_StackletGcRootFinder_new?: [translation:ERROR] rpython_rlib__stacklet_asmgcc.c:172:13: warning: assignment makes pointer from integer without a cast [enabled by default] [translation:ERROR] rpython_rlib_rsocket.c: In function ?pypy_g_PacketAddress_get_addr?: [translation:ERROR] rpython_rlib_rsocket.c:9050:13: warning: pointer targets in assignment differ in signedness [-Wpointer-sign] [translation:ERROR] rpython_rtyper_lltypesystem_rffi.c: In function ?pypy_g__PyPy_dg_dtoa__Float_Signed_Signed_arrayPtr_arra?: [translation:ERROR] rpython_rtyper_lltypesystem_rffi.c:2070:2: warning: passing argument 4 of ?_PyPy_dg_dtoa? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:61:0, [translation:ERROR] from rpython_rtyper_lltypesystem_rffi.c:5: [translation:ERROR] /home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/src/dtoa.h:4:8: note: expected ?Signed *? but argument is of type ?int *? [translation:ERROR] rpython_rtyper_lltypesystem_rffi.c:2070:2: warning: passing argument 5 of ?_PyPy_dg_dtoa? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from common_header.h:61:0, [translation:ERROR] from rpython_rtyper_lltypesystem_rffi.c:5: [translation:ERROR] /home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/src/dtoa.h:4:8: note: expected ?Signed *? but argument is of type ?int *? [translation:ERROR] structseq.c: In function ?structseq_slice?: [translation:ERROR] structseq.c:89:9: warning: passing argument 1 of ?PyTuple_SetItem? from incompatible pointer type [enabled by default] [translation:ERROR] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:132:0, [translation:ERROR] from structseq.c:4: [translation:ERROR] ../pypy_decl.h:403:17: note: expected ?struct PyObject *? but argument is of type ?struct PyTupleObject *? 
[translation:ERROR] In file included from ../module_cache/module_4.c:165:0: [translation:ERROR] /home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/src/dtoa.c:131:0: warning: "PyMem_Malloc" redefined [enabled by default] [translation:ERROR] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:117:0, [translation:ERROR] from ../module_cache/module_4.c:34: [translation:ERROR] /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/pymem.h:8:0: note: this is the location of the previous definition [translation:ERROR] In file included from ../module_cache/module_4.c:165:0: [translation:ERROR] /home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/src/dtoa.c:132:0: warning: "PyMem_Free" redefined [enabled by default] [translation:ERROR] In file included from /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/Python.h:117:0, [translation:ERROR] from ../module_cache/module_4.c:34: [translation:ERROR] /home/v3ss/Downloads/pypy-2.0.2-src/pypy/module/cpyext/include/pymem.h:9:0: note: this is the location of the previous definition [translation:ERROR] Traceback (most recent call last): [translation:ERROR] Traceback (most recent call last): [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/gcc/trackgcroot.py", line 2, in [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/gcc/trackgcroot.py", line 2, in [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/gcc/trackgcroot.py", line 2, in [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "/home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/c/gcc/trackgcroot.py", line 2, in [translation:ERROR] import re, sys, os, random [translation:ERROR] import re, sys, os, random [translation:ERROR] import re, sys, os, random [translation:ERROR] File "/usr/lib/python2.7/random.py", line 45, in [translation:ERROR] File "/usr/lib/python2.7/random.py", line 45, in [translation:ERROR] File "/usr/lib/python2.7/random.py", line 45, in [translation:ERROR] import re, sys, os, random [translation:ERROR] File "/usr/lib/python2.7/random.py", line 45, in [translation:ERROR] from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil [translation:ERROR] from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil [translation:ERROR] from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil [translation:ERROR] ImportError: from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil [translation:ERROR] ImportErrorImportError: : /usr/lib/python2.7/lib-dynload/math.so: undefined symbol: PyFPE_jbuf [translation:ERROR] /usr/lib/python2.7/lib-dynload/math.so: undefined symbol: PyFPE_jbuf/usr/lib/python2.7/lib-dynload/math.so: undefined symbol: PyFPE_jbuf [translation:ERROR] [translation:ERROR] ImportError: /usr/lib/python2.7/lib-dynload/math.so: undefined symbol: PyFPE_jbuf [translation:ERROR] make: *** [testing_1.gcmap] Error 1 [translation:ERROR] make: *** Waiting for unfinished jobs.... [translation:ERROR] make: *** [data_pypy_goal_targetpypystandalone.gcmap] Error 1 [translation:ERROR] make: *** [data_pypy_goal_targetpypystandalone_1.gcmap] Error 1 [translation:ERROR] make: *** [data_pypy_interpreter_argument.gcmap] Error 1 [translation:ERROR] """) [translation] start debugger... 
> /home/v3ss/Downloads/pypy-2.0.2-src/rpython/translator/platform/__init__.py(150)_handle_error()
-> raise CompilationError(stdout, stderr)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arigo at tunes.org Tue May 28 16:04:23 2013
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 28 May 2013 16:04:23 +0200
Subject: [pypy-dev] Compilation failed at compile_c
In-Reply-To: 
References: 
Message-ID: 

Hi Phyo,

On Tue, May 28, 2013 at 2:49 PM, Phyo Arkar wrote:
> [translation:ERROR] ImportError: /usr/lib/python2.7/lib-dynload/math.so:
> undefined symbol: PyFPE_jbuf

It looks like "import math" doesn't work on your custom-built Python.


A bientôt,

Armin.

From dirk.hunniger at googlemail.com Tue May 28 17:27:36 2013
From: dirk.hunniger at googlemail.com (Dirk Hünniger)
Date: Tue, 28 May 2013 17:27:36 +0200
Subject: [pypy-dev] RPython
Message-ID: <51A4CCE8.3000003@googlemail.com>

Hello,
I am working on a compiler for MediaWiki to LaTeX. Currently it is written
in Haskell and Python3. I feel very insecure about the Python part and I
would feel much safer if I had static typechecking in the Python part.
Still I want the Python part to be able to run with a normal Python
interpreter. So it seems to me that converting the code to RPython might
solve this issue for me. Everything else is OK; in particular, speed is not
an issue. What do you think?
Yours Dirk

PS: link to my repository
http://sourceforge.net/p/wb2pdf/code/HEAD/tree/trunk/src/

From amauryfa at gmail.com Tue May 28 17:54:32 2013
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Tue, 28 May 2013 17:54:32 +0200
Subject: [pypy-dev] RPython
In-Reply-To: <51A4CCE8.3000003@googlemail.com>
References: <51A4CCE8.3000003@googlemail.com>
Message-ID: 

Hello,

2013/5/28 Dirk Hünniger

> Hello,
> I am working on a compiler for MediaWiki to LaTeX. Currently it is written
> in Haskell and Python3. I feel very insecure about the Python part and I
> would feel much safer if I had static typechecking in the Python part.
> Still I want the Python part to be able to run with a normal Python
> interpreter. So it seems to me that converting the code to RPython might
> solve this issue for me. Everything else is OK; in particular, speed is not
> an issue. What do you think?
>

RPython is not the language you are looking for. No urllib, no xml, no
codecs... open() is not even supported!
If you want a statically typed language, write C or Java.

But Python has strong type checks, only at runtime. I think you'd better
write unit tests, or some scripts to exercise the various tools. Good code
coverage will catch all typos and also many mistakes a compiler wouldn't
tell you about.

Cheers,

-- 
Amaury Forgeot d'Arc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From davide.setti at gmail.com Tue May 28 17:57:49 2013
From: davide.setti at gmail.com (Davide Setti)
Date: Tue, 28 May 2013 17:57:49 +0200
Subject: [pypy-dev] pypy
In-Reply-To: 
References: 
Message-ID: 

On Tue, May 28, 2013 at 11:19 AM, Samir Tigrine wrote:
> Is it compatible with Zope?
https://bitbucket.org/pypy/compatibility/wiki/Home#!frameworks-and-application-servers

Regards

-- 
Davide Setti
code: http://github.com/vad

From ddvento at ucar.edu Tue May 28 18:03:54 2013
From: ddvento at ucar.edu (Davide Del Vento)
Date: Tue, 28 May 2013 10:03:54 -0600
Subject: [pypy-dev] RPython
In-Reply-To: <51A4CCE8.3000003@googlemail.com>
References: <51A4CCE8.3000003@googlemail.com>
Message-ID: <51A4D56A.7000508@ucar.edu>

> I am working on a compiler for MediaWiki to LaTeX. Currently it is
> written in Haskell and Python3. I feel very insecure about the Python
> part and I would feel much safer if I had static typechecking in the
> Python part. Still I want the Python part to be able to run with a normal
> Python interpreter. So it seems to me that converting the code to
> RPython might solve this issue for me. Everything else is OK; in
> particular, speed is not an issue. What do you think?

You may want to use PyLint, PyChecker and/or PyFlakes, not RPython.

Regards,
Davide Del Vento

From steve at pearwood.info Tue May 28 19:55:42 2013
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 29 May 2013 03:55:42 +1000
Subject: [pypy-dev] RPython
In-Reply-To: <51A4CCE8.3000003@googlemail.com>
References: <51A4CCE8.3000003@googlemail.com>
Message-ID: <51A4EF9E.3030905@pearwood.info>

On 29/05/13 01:27, Dirk Hünniger wrote:
> Hello,
> I am working on a compiler for MediaWiki to LaTeX. Currently it is
> written in Haskell and Python3. I feel very insecure about the Python
> part and I would feel much safer if I had static typechecking in the
> Python part.

Please read this article; it may help you feel better about dynamic typing:

http://cdsmith.wordpress.com/2011/01/09/an-old-article-i-wrote/


-- 
Steven

From arigo at tunes.org Thu May 30 10:23:17 2013
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 30 May 2013 10:23:17 +0200
Subject: [pypy-dev] PyPy doesn't make code written in C faster
Message-ID: 

Hi all,

Some people learn about PyPy, and the first program they try to
measure speed with is something like this:

def factorial(n):
    res = 1
    for i in range(1, n + 1):
        res *= i
    return res
print factorial(25000)

It may not be completely obvious a priori, but this is as bogus as it
gets. This is by now only 50% slower in PyPy than in CPython thanks
to efforts from various people. The issue is of course that it's an
algo which, in CPython or in PyPy, spends most of its time in C code
computing with rather large "long" objects. (No, PyPy doesn't contain
magic to speed up C code 10 times.) In fact, this program spends more
than 2/3rd of its time in the final repr() of the result! Converting
a long to base 10 is a quadratic operation.

Does it still make sense to add programs like this to our benchmarks?
So far, our benchmarks are "real-life" examples. The benchmarks like
above are completely missing the point of PyPy, as they don't stress
at all the Python interpreter part. There are also other cases where
PyPy's performance is very bad, like cpyext on an extension module
with lots of small C API calls. I believe that it would still make
sense to list such cases in the official benchmark, and have the
descriptions of the benchmarks explain what's wrong with them.


A bientôt,

Armin.

From estama at gmail.com Thu May 30 12:04:12 2013
From: estama at gmail.com (Eleytherios Stamatogiannakis)
Date: Thu, 30 May 2013 13:04:12 +0300
Subject: [pypy-dev] PyPy doesn't make code written in C faster
In-Reply-To: 
References: 
Message-ID: <51A7241C.70602@gmail.com>

On 30/05/13 11:23, Armin Rigo wrote:
> ...
> There are also other cases where
> PyPy's performance is very bad, like cpyext on an extension module
> with lots of small C API calls. I believe that it would still make
> sense to list such cases in the official benchmark, and have the
> descriptions of the benchmarks explain what's wrong with them.

On the other hand, there are also valid benchmark cases with very bad performance. Off the top of my head, reading a unicode text file was around 10-12 times slower (if I remember correctly) last time that I checked.

l.

From njh at njhurst.com  Thu May 30 18:41:02 2013
From: njh at njhurst.com (Nathan Hurst)
Date: Fri, 31 May 2013 02:41:02 +1000
Subject: [pypy-dev] PyPy doesn't make code written in C faster
In-Reply-To: 
References: 
Message-ID: <20130530164102.GA13899@ajhurst.org>

On Thu, May 30, 2013 at 10:23:17AM +0200, Armin Rigo wrote:
> Hi all,
>
> Some people learn about PyPy, and the first program they try to
> measure speed with is something like this:
>
> def factorial(n):
>     res = 1
>     for i in range(1, n + 1):
>         res *= i
>     return res
> print factorial(25000)
>
> It may not be completely obvious a priori, but this is as bogus as it
> gets. This is by now only 50% slower in PyPy than in CPython thanks
> to efforts from various people. The issue is of course that it's an
> algo which, in CPython or in PyPy, spends most of its time in C code
> computing with rather large "long" objects. (No, PyPy doesn't contain
> magic to speed up C code 10 times.) In fact, this program spends more
> than 2/3rd of its time in the final repr() of the result! Converting
> a long to base 10 is a quadratic operation.

It doesn't have to be quadratic, it's easy to come up with a splitting algorithm:

def reclongtostr(x):
    if x < 0: return "-"+reclongtostr(-x)
    x = long(x)     # expect a long
    min_digits = 9  # fits in 32 bits, there may be a better choice for this
    pts = [10**min_digits]
    while pts[-1] < x:
        pts.append(pts[-1]**2)
    pts.pop()  # remove first 10**2**i greater than x
    output = []
    def spl(x,i):
        if i < 0:  # bottomed out with max_digit sized pieces
            if output or x > 0:
                s = str(x)
                output.append("0"*(min_digits - len(s)) + s)  # note that this appends in inorder
        else:
            top,bot = divmod(x, pts[i])  # split the number
            spl(top,i-1)
            spl(bot,i-1)
    spl(x,len(pts)-1)
    # strip leading zeros, we can probably do this more elegantly
    while output[0][0] == "0":
        output[0] = output[0][1:]
    return ''.join(output)

which benchmarks factorial(25000) like this:

import time
s = time.time()
x = factorial(25000)
print "factorial", time.time() - s
sx = str(x)  # give pypy a chance to compile
s = time.time()
sx = str(x)
print "Str time", time.time() - s
rsx = reclongtostr(x)  # give pypy a chance to compile
s = time.time()
rsx = reclongtostr(x)
print "my rec time", time.time() - s
print "equal string:", sx == rsx

factorial 0.182402133942
Str time 0.505062818527
my rec time 0.0678248405457
equal string: True

I'm sure a better programmer than I could make this faster by avoiding saving intermediate results and various micro optimisations. But beating the builtin C implementation by a factor of 7.5 seems a reasonable outcome for pypy.

I think I could come up with a linear time two pass algorithm working on intdigits if this were important to pypy.

> Does it still make sense to add programs like this to our benchmarks?
> So far, our benchmarks are "real-life" examples. The benchmarks like
> above are completely missing the point of PyPy, as they don't stress
> at all the Python interpreter part.
> There are also other cases where
> PyPy's performance is very bad, like cpyext on an extension module
> with lots of small C API calls. I believe that it would still make
> sense to list such cases in the official benchmark, and have the
> descriptions of the benchmarks explain what's wrong with them.

I agree that you should include them; I disagree that they are 'wrong'. They measure the overhead of a C call. Why should a C call be slower in pypy than cpython? Presumably it could be compiled down to the appropriate instructions and then out-perform cpy.

Now that the topic of benchmarks has come up, I came across this benchmark recently:
http://dalkescientific.com/writings/diary/archive/2009/11/15/100000_tasklets.html

The same benchmark took 8.5s on pypy 2beta2 and takes 7.5s on pypy 2.0.1. Are there any obvious reasons why pypy's tasklets are so slow to switch? (Is it the scheduler?) This is important for my adoption of pypy at work.

njh

From amauryfa at gmail.com  Thu May 30 19:48:10 2013
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Thu, 30 May 2013 19:48:10 +0200
Subject: [pypy-dev] PyPy doesn't make code written in C faster
In-Reply-To: <20130530164102.GA13899@ajhurst.org>
References: <20130530164102.GA13899@ajhurst.org>
Message-ID: 

2013/5/30 Nathan Hurst
> > Does it still make sense to add programs like this to our benchmarks?
> > So far, our benchmarks are "real-life" examples. The benchmarks like
> > above are completely missing the point of PyPy, as they don't stress
> > at all the Python interpreter part. There are also other cases where
> > PyPy's performance is very bad, like cpyext on an extension module
> > with lots of small C API calls. I believe that it would still make
> > sense to list such cases in the official benchmark, and have the
> > descriptions of the benchmarks explain what's wrong with them.
>
> I agree that you should include them; I disagree that they are
> 'wrong'. They measure the overhead of a C call. Why should a C call
> be slower in pypy than cpython? Presumably it could be compiled down
> to the appropriate instructions and then out-perform cpy.

The C API here is the one of the CPython interpreter (PyLong_FromLong & co). To support it PyPy has to emulate many aspects, especially the fact that pypy objects are movable memory, and a PyObject* pointer is not supposed to change.

To get fair benchmarks, those extension modules should be rewritten, with cffi for example: its C calls have very little overhead, and it integrates very well with the rest of the PyPy interpreter.

-- 
Amaury Forgeot d'Arc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arigo at tunes.org  Fri May 31 11:43:15 2013
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 31 May 2013 11:43:15 +0200
Subject: [pypy-dev] PyPy doesn't make code written in C faster
In-Reply-To: <20130530164102.GA13899@ajhurst.org>
References: <20130530164102.GA13899@ajhurst.org>
Message-ID: 

Hi Nathan,

On Thu, May 30, 2013 at 6:41 PM, Nathan Hurst wrote:
> It doesn't have to be quadratic, it's easy to come up with a splitting
> algorithm:

I believe that you're right on one point and wrong on another. You're right in that this gives a faster algo for str(). You're wrong in that it's still quadratic. If 'a' has 2N digits and 'b' has N digits, then divmod(a,b) is quadratic --- takes time proportional to N*N. It can be shown by measuring the time spent by your algo to do the repr of larger and larger numbers.
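(As a concrete illustration of the measurement just described -- a sketch, not code from this thread; the digit sizes and the helper name time_str are arbitrary choices. It times str() on numbers with twice as many digits at each step; a roughly fourfold increase per line is the signature of a quadratic conversion, and reclongtostr() from earlier in the thread can be timed the same way.)

import time

def time_str(ndigits):
    n = 10 ** ndigits - 1   # an ndigits-digit number (all nines)
    t0 = time.time()
    str(n)                  # swap in reclongtostr(n) to measure the splitting version
    return time.time() - t0

for ndigits in (10000, 20000, 40000, 80000):
    print ndigits, "digits:", "%.3f s" % time_str(ndigits)
# if the conversion is quadratic, each timing is roughly 4x the previous one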
> beating the builtin C implementation by a factor of 7.5 seems a
> reasonable outcome for pypy.

No, precisely my point: this argument is bogus. The proof that it's wrong is that CPython gets very similar timing results! Your pure Python version outperforms the C str(long) in a very similar way on PyPy and on CPython! The "bug" is originally in CPython, for having a str() that is too slow, and I just copied it into PyPy. The pure Python version you posted is faster. Its speed is roughly the same on CPython and on PyPy because most of the time is spent doing divmod on large "long" objects (which is this post's original point).

> I think I could come up with a linear time two pass algorithm working
> on intdigits if this were important to pypy.

That would be interesting for both PyPy and CPython.


A bientôt,

Armin.

From cfbolz at gmx.de  Fri May 31 12:01:27 2013
From: cfbolz at gmx.de (Carl Friedrich Bolz)
Date: Fri, 31 May 2013 12:01:27 +0200
Subject: [pypy-dev] PyPy doesn't make code written in C faster
In-Reply-To: 
References: <20130530164102.GA13899@ajhurst.org>
Message-ID: 

Hi Armin,

I have only glanced at the code, but isn't the right argument of the divmod always a power of two? So it can be replaced by a shift and a mask, giving the right complexity.

Cheers,

Carl Friedrich

Armin Rigo wrote:
>Hi Nathan,
>
>On Thu, May 30, 2013 at 6:41 PM, Nathan Hurst wrote:
>> It doesn't have to be quadratic, it's easy to come up with a
>> splitting algorithm:
>
>I believe that you're right on one point and wrong on another. You're
>right in that this gives a faster algo for str(). You're wrong in
>that it's still quadratic. If 'a' has 2N digits and 'b' has N digits,
>then divmod(a,b) is quadratic --- takes time proportional to N*N. It
>can be shown by measuring the time spent by your algo to do the repr
>of larger and larger numbers.
>
>> beating the builtin C implementation by a factor of 7.5 seems a
>> reasonable outcome for pypy.
>
>No, precisely my point: this argument is bogus. The proof that it's
>wrong is that CPython gets very similar timing results! Your pure
>Python version outperforms the C str(long) in a very similar way on
>PyPy and on CPython! The "bug" is originally in CPython, for having a
>str() that is too slow, and I just copied it into PyPy. The pure
>Python version you posted is faster. Its speed is roughly the same on
>CPython and on PyPy because most of the time is spent doing divmod on
>large "long" objects (which is this post's original point).
>
>> I think I could come up with a linear time two pass algorithm working
>> on intdigits if this were important to pypy.
>
>That would be interesting for both PyPy and CPython.
>
>
>A bientôt,
>
>Armin.
>_______________________________________________
>pypy-dev mailing list
>pypy-dev at python.org
>http://mail.python.org/mailman/listinfo/pypy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bokr at oz.net  Fri May 31 15:01:03 2013
From: bokr at oz.net (Bengt Richter)
Date: Fri, 31 May 2013 15:01:03 +0200
Subject: [pypy-dev] PyPy doesn't make code written in C faster
In-Reply-To: <20130530164102.GA13899@ajhurst.org>
References: <20130530164102.GA13899@ajhurst.org>
Message-ID: 

On 05/30/2013 06:41 PM Nathan Hurst wrote:
> On Thu, May 30, 2013 at 10:23:17AM +0200, Armin Rigo wrote:
>> Hi all,
>>
>> Some people learn about PyPy, and the first program they try to
>> measure speed with is something like this:
>>
>> def factorial(n):
>>     res = 1
>>     for i in range(1, n + 1):
>>         res *= i
>>     return res
>> print factorial(25000)
>>
>> It may not be completely obvious a priori, but this is as bogus as it
>> gets. This is by now only 50% slower in PyPy than in CPython thanks
>> to efforts from various people. The issue is of course that it's an
>> algo which, in CPython or in PyPy, spends most of its time in C code
>> computing with rather large "long" objects. (No, PyPy doesn't contain
>> magic to speed up C code 10 times.) In fact, this program spends more
>> than 2/3rd of its time in the final repr() of the result! Converting
>> a long to base 10 is a quadratic operation.
>
> It doesn't have to be quadratic, it's easy to come up with a splitting
> algorithm:
>
> def reclongtostr(x):
>     if x < 0: return "-"+reclongtostr(-x)
>     x = long(x)     # expect a long
>     min_digits = 9  # fits in 32 bits, there may be a better choice for this
>     pts = [10**min_digits]
>     while pts[-1] < x:
>         pts.append(pts[-1]**2)
>     pts.pop()  # remove first 10**2**i greater than x
>     output = []
>     def spl(x,i):
>         if i < 0:  # bottomed out with max_digit sized pieces
>             if output or x > 0:
>                 s = str(x)
>                 output.append("0"*(min_digits - len(s)) + s)  # note that this appends in inorder
>         else:
>             top,bot = divmod(x, pts[i])  # split the number
>             spl(top,i-1)
>             spl(bot,i-1)
>     spl(x,len(pts)-1)
>     # strip leading zeros, we can probably do this more elegantly
>     while output[0][0] == "0":
>         output[0] = output[0][1:]
>     return ''.join(output)
>
> which benchmarks factorial(25000) like this:
>
> import time
> s = time.time()
> x = factorial(25000)
> print "factorial", time.time() - s
> sx = str(x)  # give pypy a chance to compile
> s = time.time()
> sx = str(x)
> print "Str time", time.time() - s
> rsx = reclongtostr(x)  # give pypy a chance to compile
> s = time.time()
> rsx = reclongtostr(x)
> print "my rec time", time.time() - s
> print "equal string:", sx == rsx
>
> factorial 0.182402133942
> Str time 0.505062818527
> my rec time 0.0678248405457
> equal string: True
>
>
> I'm sure a better programmer than I could make this faster by avoiding
> saving intermediate results and various micro optimisations. But
> beating the builtin C implementation by a factor of 7.5 seems a
> reasonable outcome for pypy.
>
> I think I could come up with a linear time two pass algorithm working
> on intdigits if this were important to pypy.
>
>> Does it still make sense to add programs like this to our benchmarks?
>> So far, our benchmarks are "real-life" examples. The benchmarks like
>> above are completely missing the point of PyPy, as they don't stress
>> at all the Python interpreter part. There are also other cases where
>> PyPy's performance is very bad, like cpyext on an extension module
>> with lots of small C API calls. I believe that it would still make
>> sense to list such cases in the official benchmark, and have the
>> descriptions of the benchmarks explain what's wrong with them.
>
> I agree that you should include them; I disagree that they are
> 'wrong'. They measure the overhead of a C call. Why should a C call
> be slower in pypy than cpython? Presumably it could be compiled down
> to the appropriate instructions and then out-perform cpy.
>
> Now that the topic of benchmarks has come up, I came across this
> benchmark recently:
> http://dalkescientific.com/writings/diary/archive/2009/11/15/100000_tasklets.html
>
> The same benchmark took 8.5s on pypy 2beta2 and takes 7.5s on pypy
> 2.0.1. Are there any obvious reasons why pypy's tasklets are so
> slow to switch? (Is it the scheduler?) This is important for my
> adoption of pypy at work.
>
> njh

I remember doing something similar to convert long to string, way back in .. let's see .. googling google comp.lang.python .. oy, 2001, it's been a while ;-)

https://groups.google.com/forum/?hl=sv&fromgroups#!searchin/comp.lang.python/richter$20strL.py/comp.lang.python/6HYHojX7ZlA/Wizytwby71QJ
(from top, search to first instance of strL.py)

I guess it's not too long to post a copy here: ..
(Hm, I see I should have imported time from time instead of clock, since the latter has just scheduler tick resolution, and time is high resolution.)
____________________________________________________
A couple of lines wrapped...
_______________________________________________________________
# strL.py -- recursive long to decimal string conversion
# 2001-07-10 bokr
#
p10d={}
def p10(n):
    if not p10d.has_key(n):
        p = p10d[n] = 10L**n
        return p
    return p10d[n]

def strLR(n,w=0):  # w>0 is known width, w<0 is searching guess
    if w == 0: return []
    if w > 0:
        if w <= 9:
            return [('%0'+chr(w+48)+'d') % n]
        wr = w/2
        nl,nr = divmod(n,p10(wr))
        return strLR(nl,w-wr)+strLR(nr,wr)
    else:
        nl,nr = divmod(n,p10(-w))
        if nl:
            return strLR(nl, 2*w) + strLR(nr,-w)
        else:
            if w >= -9:
                return ['%d' % n]
            else:
                return strLR(nr,w/2)

def strL(n):
    if n<0:
        return ''.join(['-']+strLR(-n,-9))
    else:
        return ''.join(strLR(n,-9))

from time import clock
import sys

def main():
    def pr(x): print x
    def psl(x):
        s = strL(x)
        sys.stdout.write(s)
        sys.stdout.write('\n')
    dt={'str':str,'strL':strL,'repr':repr, 'print':pr, 'printStrL':psl }
    try:
        x=long( eval(sys.argv[2]) )
        fn=sys.argv[1]
        fcn=dt[fn]
    except:
        sys.stderr.write("usage: %s [str strL repr print printStrL] \n" % sys.argv[0])
        sys.exit(2)
    t0=clock()
    fcn(x)
    t1=clock()
    print "%s took %9.6f seconds" % (fn,t1-t0)

if __name__ == "__main__":
    main()
____________________________________________________

Got curious, so I added to your benchmark:

from strL import strL  ## bokr mod

and

## bokr mod: add timing of strL, like above
rsb = strL(x)  # give pypy a chance to compile
s = time.time()
rsx = strL(x)
print "strL rec time", time.time() - s
print "equal string:", sx == rsb

then ran it (with old python and pypy on an old laptop):

Version info:

[14:49 ~/wk/py/clp]$ python
Python 2.7.2 (default, Jul 8 2011, 23:38:53)
[GCC 4.1.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
[14:49 ~/wk/py/clp]$ pypy
pypy: /usr/lib/libcrypto.so.0.9.8: no version information available (required by pypy)
pypy: /usr/lib/libssl.so.0.9.8: no version information available (required by pypy)
Python 2.7.1 (b590cf6de419, Apr 30 2011, 02:00:38)
[PyPy 1.5.0-alpha0 with GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: ``nothing is true''
>>>>

The bench run:

[14:49 ~/wk/py/clp]$ python NathanHurst.py
factorial 1.36516094208
Str time 2.93479013443
my rec time 1.45956683159
equal string: True
strL rec time 1.34501504898
equal string: True
[14:50 ~/wk/py/clp]$ pypy NathanHurst.py
pypy: /usr/lib/libcrypto.so.0.9.8: no version information available (required by pypy)
pypy: /usr/lib/libssl.so.0.9.8: no version information available (required by pypy)
factorial 2.29024791718
Str time 3.14243102074
my rec time 1.25054502487
equal string: True
strL rec time 1.12671113014
equal string: True
[14:50 ~/wk/py/clp]$

... Hm, wonder why your factorial was slower on pypy .. prob due to old version?

Regards,
Bengt Richter

From arigo at tunes.org  Fri May 31 16:18:25 2013
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 31 May 2013 16:18:25 +0200
Subject: [pypy-dev] PyPy doesn't make code written in C faster
In-Reply-To: 
References: <20130530164102.GA13899@ajhurst.org>
Message-ID: 

Hi Bengt,

On Fri, May 31, 2013 at 3:01 PM, Bengt Richter wrote:
> [PyPy 1.5.0-alpha0 with GCC 4.4.3] on linux2
>
> ... Hm, wonder why your factorial was slower on pypy .. prob due to old
> version?

Another benchmark that completely misses the point, yay! This one shows that we improved quite a bit since PyPy 1.5.0-alpha0, which was literally ages ago.


A bientôt,

Armin.

From arigo at tunes.org  Fri May 31 16:15:58 2013
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 31 May 2013 16:15:58 +0200
Subject: [pypy-dev] PyPy doesn't make code written in C faster
In-Reply-To: 
References: <20130530164102.GA13899@ajhurst.org>
Message-ID: 

Hi Carl Friedrich,

On Fri, May 31, 2013 at 12:01 PM, Carl Friedrich Bolz wrote:
> I have only glanced at the code, but isn't the right argument of the divmod
> always a power of two? So it can be replaced by a shift and a mask, giving
> the right complexity.

It's a power of 10.


A bientôt,

Armin.

From njh at njhurst.com  Fri May 31 17:48:45 2013
From: njh at njhurst.com (Nathan Hurst)
Date: Sat, 1 Jun 2013 01:48:45 +1000
Subject: [pypy-dev] PyPy doesn't make code written in C faster
Message-ID: <20130531154845.GA28495@ajhurst.org>

Sent to Carl only by mistake, I'm still getting the hang of this newfangled email thing...

Carl said
> Armin said
> >No, precisely my point: this argument is bogus. The proof that it's
> >wrong is that CPython gets very similar timing results! Your pure
> >Python version outperforms the C str(long) in a very similar way on
> >PyPy and on CPython! The "bug" is originally in CPython, for having a
> >str() that is too slow, and I just copied it into PyPy. The pure
> >Python version you posted is faster. Its speed is roughly the same on
> >CPython and on PyPy because most of the time is spent doing divmod on
> >large "long" objects (which is this post's original point).

divmod in principle can be done in O(multiplication log* n), which for large numbers can be O(n log n). I don't know whether py*'s implementation does this. How do you determine where the bottlenecks are?

> >I believe that you're right on one point and wrong on another. You're
> >right in that this gives a faster algo for str(). You're wrong in
> >that it's still quadratic. If 'a' has 2N digits and 'b' has N digits,
> >then divmod(a,b) is quadratic --- takes time proportional to N*N. It
> >can be shown by measuring the time spent by your algo to do the repr
> >of larger and larger numbers.
For what it's worth, the time for str(long) / time for recursive algorithm does decrease steadily for increasing input lengths. But it's not n^2 / n log n. Perhaps we need to implement a faster divmod? Can I assume that bit_length() is O(1)?

I checked Knuth last night; he doesn't have anything to say in the main text, but he says in the exercises (II.4.4):

14. [M27] (A. Schonhage.) The text's method of converting multiple-precision integers requires an execution time of order n^2 to convert an n-place integer, when n is large. Show that it is possible to convert n-digit decimal integers into binary notation in O(M(n) log n) steps, where M(n) is an upper bound on the number of steps needed to multiply n-bit binary numbers that satisfies the "smoothness condition" M(2n) >= 2M(n).

15. [M47] Can the upper bound on the time to convert large integers, given in exercise 14, be substantially lowered?

which fits my intuition. I'm fairly sure that my (and bokr's) algorithm is O(M(n) log n). It also suggests I'm wrong about a linear time algorithm (though the question doesn't state either way, but given its M47 hardness, I'm probably not going to be able to whip this up tonight :) Lloyd Allison observed that changing any bit can affect any digits, which makes beating n log n less likely.

One of the main reasons I use python is because it lets me concentrate on the higher level algorithms (like haskell), but it is pragmatic about the way we tend to write programs (unlike haskell :). I doubt I could have written that algorithm in C before breakfast (as I did for the python version).

But to the main point: is it fair for people to compare code which doesn't get the benefit of pypy? Yes it is. Because the majority of code out there today is going to have C calls. Sure, pypy will lose on those, but that provides incentive to fix the problem - for example, implementing a better long to string.

People are going to write silly benchmarks and they are going to solve problems in silly ways. We should be honest about this in the benchmarks. Don't worry, pypy will do just fine.

On Fri, May 31, 2013 at 12:01:27PM +0200, Carl Friedrich Bolz wrote:
> Hi Armin,
>
> I have only glanced at the code, but isn't the right argument of the divmod always a power of two? So it can be replaced by a shift and a mask, giving the right complexity.
>

No, it's a power of 10, 10^2^i in fact.

njh

> Cheers,
>
> Carl Friedrich

----- End forwarded message -----

From arigo at tunes.org  Fri May 31 18:05:57 2013
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 31 May 2013 18:05:57 +0200
Subject: [pypy-dev] PyPy doesn't make code written in C faster
In-Reply-To: <20130531154845.GA28495@ajhurst.org>
References: <20130531154845.GA28495@ajhurst.org>
Message-ID: 

Hi Nathan,

On Fri, May 31, 2013 at 5:48 PM, Nathan Hurst wrote:
> For what it's worth, the time for str(long) / time for recursive
> algorithm does decrease steadily for increasing input lengths. But
> it's not n^2 / n log n. Perhaps we need to implement a faster divmod?

Actually I found an old but still in-progress bug report for CPython:
http://bugs.python.org/issue3451
It contains all these questions and more. CPython seems mainly blocked by lack of man-power sufficient to review all the reference counting mess. That seems like a good reason to write the algorithms in PyPy first :-)

> But to the main point: is it fair for people to compare code which
> doesn't get the benefit of pypy? Yes it is. Because the majority of
> code out there today is going to have C calls.
> Sure, pypy will lose
> on those, but that provides incentive to fix the problem - for
> example, implementing a better long to string.
>
> People are going to write silly benchmarks and they are going to solve
> problems in silly ways. We should be honest about this in the
> benchmarks. Don't worry, pypy will do just fine.

Yes, I believe you are right. We should really add these kinds of benchmarks too. It's a trap that is natural to fall into, and we should continue to motivate the reasons for why PyPy is not 5 times faster than CPython here.


A bientôt,

Armin.
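(To make the cffi suggestion from earlier in the thread concrete, here is a minimal sketch -- not code from the thread. It assumes a Unix-like system where ffi.dlopen(None) exposes the C library of the running process, and the variable names are arbitrary. A call declared this way goes through cffi rather than through the CPython C API emulated by cpyext, which is the kind of C call PyPy can keep cheap.)

from cffi import FFI

ffi = FFI()
ffi.cdef("int abs(int x);")   # declare the C function we want to call
libc = ffi.dlopen(None)       # load the C library already linked into the process

print libc.abs(-42)           # prints 42; the call goes straight to the C function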