From arigo at tunes.org  Mon Sep  1 12:20:03 2014
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 1 Sep 2014 12:20:03 +0200
Subject: [pypy-dev] pypy 2.3.1 json encoding performnce is Extremely slow (30x slower ).
In-Reply-To:
References:
Message-ID:

Hi Alex,

On 30 August 2014 08:43, Armin Rigo wrote:
> went down from 5.8ms per loop to 208us.  However, I see that running
> the same example with 10000 ascii chars went up from 41.4us to 139us.
> Time to tweak.

Tweaked!  See 65ac482d28d6.  This was because if you used this kind of
code in RPython:

    for c in unistring:
        if c >= u' ':
            ...

then the comparison works, but is done by converting the character back
to a full unicode string and calling ll_strcmp()... so the big overhead
was caused half by the conversion costs and half by the extra GC
pressure.

A bientôt,

Armin.


From arigo at tunes.org  Mon Sep  1 12:31:23 2014
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 1 Sep 2014 12:31:23 +0200
Subject: [pypy-dev] pypy 2.3.1 json encoding performnce is Extremely slow (30x slower ).
In-Reply-To:
References:
Message-ID:

Re-hi,

On 1 September 2014 12:20, Armin Rigo wrote:
> Tweaked!

Note that the final result is 33% faster in your example.


Armin


From alex.gaynor at gmail.com  Mon Sep  1 16:21:52 2014
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Mon, 1 Sep 2014 07:21:52 -0700
Subject: [pypy-dev] pypy 2.3.1 json encoding performnce is Extremely slow (30x slower ).
In-Reply-To:
References:
Message-ID:

Wow, nice catch!

Alex

On Mon, Sep 1, 2014 at 3:31 AM, Armin Rigo wrote:
> Re-hi,
>
> On 1 September 2014 12:20, Armin Rigo wrote:
> > Tweaked!
>
> Note that the final result is 33% faster in your example.
>
>
> Armin

--
"I disapprove of what you say, but I will defend to the death your right
to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
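[Editor's note: the RPython pitfall Armin describes, comparing a character against a string with `c >= u' '`, can be illustrated with a small app-level sketch. This is hypothetical code, not PyPy's actual RPython source: at app level both functions behave identically, but the second compares integer ordinals, which is the cheap operation the fix effectively produces at the RPython level.]

```python
# Two equivalent JSON-style escape loops (hypothetical sketch).  In old
# RPython, the `c >= u' '` form boxed the character back into a full
# unicode string and called ll_strcmp(); comparing ord() values avoids
# any such temporary string.

ESCAPE_MAP = {u'"': u'\\"', u'\\': u'\\\\', u'\n': u'\\n',
              u'\r': u'\\r', u'\t': u'\\t'}

def escape_slow(s):
    out = []
    for c in s:
        if c >= u' ' and c not in ESCAPE_MAP:    # character-vs-string compare
            out.append(c)
        else:
            out.append(ESCAPE_MAP.get(c, u'\\u%04x' % ord(c)))
    return u''.join(out)

def escape_fast(s):
    out = []
    for c in s:
        n = ord(c)                               # compare plain integers
        if n >= 0x20 and c not in ESCAPE_MAP:
            out.append(c)
        else:
            out.append(ESCAPE_MAP.get(c, u'\\u%04x' % n))
    return u''.join(out)
```

Both produce the same output; only the cost of the per-character comparison differs in the RPython lowering.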
From phyo.arkarlwin at gmail.com  Mon Sep  1 17:06:46 2014
From: phyo.arkarlwin at gmail.com (Phyo Arkar)
Date: Mon, 1 Sep 2014 21:36:46 +0630
Subject: [pypy-dev] pypy 2.3.1 json encoding performnce is Extremely slow (30x slower ).
In-Reply-To:
References:
Message-ID:

Thanks a lot.  I haven't got to test the latest commit yet.
So with that, is the attached benchmark in pypy running faster than
python now?

On Mon, Sep 1, 2014 at 5:01 PM, Armin Rigo wrote:
> Re-hi,
>
> On 1 September 2014 12:20, Armin Rigo wrote:
>> Tweaked!
>
> Note that the final result is 33% faster in your example.
>
>
> Armin

From arigo at tunes.org  Mon Sep  1 17:54:23 2014
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 1 Sep 2014 17:54:23 +0200
Subject: [pypy-dev] pypy 2.3.1 json encoding performnce is Extremely slow (30x slower ).
In-Reply-To:
References:
Message-ID:

Hi,

On 1 September 2014 17:06, Phyo Arkar wrote:
> Thanks a lot.  I haven't got to test the latest commit yet.
> So with that, is the attached benchmark in pypy running faster than
> python now?

Yes, for the utf-8 test (the tests with "double" didn't change).  Here
is what I get on Linux 64:

$ pypy-c-r73264-jit benchmark.py

Array with 256 doubles:
  simplejson encode     :  3871.15257 calls/sec
  simplejson decode     : 14651.04979 calls/sec
Array with 256 utf-8 strings:
  simplejson encode UTF :  1393.29238 calls/sec
  simplejson decode UTF :   276.03465 calls/sec

$ python benchmark.py   # 2.7.3

Array with 256 doubles:
  simplejson encode     :  3469.44902 calls/sec
  simplejson decode     : 12419.69240 calls/sec
Array with 256 utf-8 strings:
  simplejson encode UTF :  1278.32368 calls/sec
  simplejson decode UTF :   355.18049 calls/sec


A bientôt,

Armin.

From phyo.arkarlwin at gmail.com  Tue Sep  2 14:24:12 2014
From: phyo.arkarlwin at gmail.com (Phyo Arkar)
Date: Tue, 2 Sep 2014 18:54:12 +0630
Subject: [pypy-dev] pypy 2.3.1 json encoding performnce is Extremely slow (30x slower ).
In-Reply-To:
References:
Message-ID:

Very nice.  PyPy is the future.
On Sep 1, 2014 10:25 PM, "Armin Rigo" wrote:
> Hi,
>
> On 1 September 2014 17:06, Phyo Arkar wrote:
> > Thanks a lot.  I haven't got to test the latest commit yet.
> > So with that, is the attached benchmark in pypy running faster than
> > python now?
>
> Yes, for the utf-8 test (the tests with "double" didn't change).  Here
> is what I get on Linux 64:
>
> $ pypy-c-r73264-jit benchmark.py
>
> Array with 256 doubles:
>   simplejson encode     :  3871.15257 calls/sec
>   simplejson decode     : 14651.04979 calls/sec
> Array with 256 utf-8 strings:
>   simplejson encode UTF :  1393.29238 calls/sec
>   simplejson decode UTF :   276.03465 calls/sec
>
> $ python benchmark.py   # 2.7.3
>
> Array with 256 doubles:
>   simplejson encode     :  3469.44902 calls/sec
>   simplejson decode     : 12419.69240 calls/sec
> Array with 256 utf-8 strings:
>   simplejson encode UTF :  1278.32368 calls/sec
>   simplejson decode UTF :   355.18049 calls/sec
>
>
> A bientôt,
>
> Armin.

From phyo.arkarlwin at gmail.com  Wed Sep  3 12:13:16 2014
From: phyo.arkarlwin at gmail.com (Phyo Arkar)
Date: Wed, 3 Sep 2014 16:43:16 +0630
Subject: [pypy-dev] Pypy Benchmark of Tornado.
Message-ID:

It just returns a json document of about a thousand characters (1053
bytes).

$ siege -c 400 -t 20s -r 2000 http://localhost:9999/js

Python 2.7.7:

Lifting the server siege...      done.
Transactions:              14478 hits
Availability:             100.00 %
Elapsed time:              19.10 secs
Data transferred:          14.54 MB
Response time:              0.01 secs
Transaction rate:         758.01 trans/sec
Throughput:                 0.76 MB/sec
Concurrency:                8.91
Successful transactions:   14478
Failed transactions:           0
Longest transaction:        1.08 seconds
Shortest transaction:       0.00

pypy-2.3.1 stable:

Transactions:              15149 hits
Availability:             100.00 %
Elapsed time:              19.63 secs
Data transferred:          15.21 MB
Response time:              0.02 secs
Transaction rate:         771.73 trans/sec
Throughput:                 0.77 MB/sec
Concurrency:               11.92
Successful transactions:   15149
Failed transactions:           0
Longest transaction:        1.09 seconds
Shortest transaction:       0.00

pypy-c-jit-73283-912dd9df99a8-linux64 (latest nightly build):

Lifting the server siege...      done.

Transactions:              14361 hits
Availability:             100.00 %
Elapsed time:              19.13 secs
Data transferred:          14.42 MB
Response time:              0.03 secs
Transaction rate:         750.71 trans/sec
Throughput:                 0.75 MB/sec
Concurrency:               21.53
Successful transactions:   14361
Failed transactions:           0
Longest transaction:        3.03 seconds
Shortest transaction:       0.00

With the PyPy nightly, some requests randomly take up to 3.0 seconds;
normally those requests (on CPython) take only ~0.001 to 0.002 sec.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: benchmark_tornado.py
Type: text/x-python
Size: 1143 bytes
Desc: not available

From phyo.arkarlwin at gmail.com  Wed Sep  3 17:21:59 2014
From: phyo.arkarlwin at gmail.com (Phyo Arkar)
Date: Wed, 3 Sep 2014 21:51:59 +0630
Subject: [pypy-dev] Pypy Benchmark of Tornado.
In-Reply-To:
References:
Message-ID:

I expect PyPy to be faster in those cases, but select-based IO is not a
CPU-intensive thing to do, so no real benefit from using PyPy here, I
guess.
On Wed, Sep 3, 2014 at 4:43 PM, Phyo Arkar wrote:
>
> It just returns a json document of about a thousand characters (1053
> bytes).
>
> $ siege -c 400 -t 20s -r 2000 http://localhost:9999/js
>
> [benchmark output snipped]
>
> With the PyPy nightly, some requests randomly take up to 3.0 seconds;
> normally those requests (on CPython) take only ~0.001 to 0.002 sec.

From Ben.Jolitz at acxiom.com  Thu Sep  4 00:26:28 2014
From: Ben.Jolitz at acxiom.com (Jolitz Ben - bjolit)
Date: Wed, 3 Sep 2014 22:26:28 +0000
Subject: [pypy-dev] Pypy Benchmark of Tornado.
In-Reply-To:
References:
Message-ID:

I use Tornado and have found PyPy can yield a 30-50% performance
increase for a moderately complex project.

Ben

From: Phyo Arkar
Date: Wednesday, September 3, 2014 at 8:21 AM
To: pypy-dev
Subject: Re: [pypy-dev] Pypy Benchmark of Tornado.

I expect PyPy to be faster in those cases, but select-based IO is not a
CPU-intensive thing to do, so no real benefit from using PyPy here, I
guess.
On Wed, Sep 3, 2014 at 4:43 PM, Phyo Arkar wrote:
>
> [benchmark output snipped]

***************************************************************************
The information contained in this communication is confidential, is
intended only for the use of the recipient named above, and may be
legally privileged.

If the reader of this message is not the intended recipient, you are
hereby notified that any dissemination, distribution or copying of this
communication is strictly prohibited.

If you have received this communication in error, please resend this
communication to the sender and delete the original message or any copy
of it from your computer system.

Thank You.
****************************************************************************
From lac at openend.se  Thu Sep  4 19:27:08 2014
From: lac at openend.se (Laura Creighton)
Date: Thu, 04 Sep 2014 19:27:08 +0200
Subject: [pypy-dev] Python Obfuscation Challenge (fwd)
Message-ID: <201409041727.s84HR853023328@fido.openend.se>

Not important, but I just received this in the mail.  Those of you who
read python-announce will have got it as well.

------- Forwarded Message

Return-Path:
From: Serge Guelton
To: python-announce-list at python.org
Subject: Python Obfuscation Challenge
Message-ID: <20140904095459.GA13895 at lakota>
Reply-To: python-list at python.org
List-Id: Announcement-only list for the Python programming language

Hi all,

The QuarksLab[0] company just released a Capture The Flag challenge with
an emphasise on Python and CPython:

http://blog.quarkslab.com/you-like-python-security-challenge-and-traveling-win-a-free-ticket-to-hitb-kul.html

There are a few free tickets to the HITB conference[1] to win, so
unleash the hacker in you!

enjoy,

[0] I am indeed an employee of QuarksLab :-/
[1] https://conference.hitb.org/hitbsecconf2014kul

--
https://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/

------- End of Forwarded Message

1. s/emphasise/emphasis/
2. hmmm. so are we the real python now in the eyes of the world?  python
   and CPython, they say ...
3. Anybody want to visit Kuala Lampur?

Just wanted to mention it,
Laura

From phyo.arkarlwin at gmail.com  Thu Sep  4 22:55:52 2014
From: phyo.arkarlwin at gmail.com (Phyo Arkar)
Date: Fri, 5 Sep 2014 03:25:52 +0630
Subject: [pypy-dev] Pypy Benchmark of Tornado.
In-Reply-To:
References:
Message-ID:

Thanks a lot, Ben.
OK, as PyPy is the choice of Quora and they also use Tornado, I might
keep testing on larger projects.
How about MongoDB performance on PyPy?  I heard it's slower due to
having no C extension (and no CFFI driver) for PyPy.
Your suggestions will be very appreciated.
On Thu, Sep 4, 2014 at 4:56 AM, Jolitz Ben - bjolit wrote:
> I use Tornado and have found PyPy can yield a 30-50% performance
> increase for a moderately complex project.
>
> Ben
>
> From: Phyo Arkar
> Date: Wednesday, September 3, 2014 at 8:21 AM
> To: pypy-dev
> Subject: Re: [pypy-dev] Pypy Benchmark of Tornado.
>
> I expect PyPy to be faster in those cases, but select-based IO is not a
> CPU-intensive thing to do, so no real benefit from using PyPy here, I
> guess.
>
> [quoted benchmark output and confidentiality footer snipped]
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev

From bokr at oz.net  Fri Sep  5 01:44:58 2014
From: bokr at oz.net (Bengt Richter)
Date: Fri, 05 Sep 2014 01:44:58 +0200
Subject: [pypy-dev] Python Obfuscation Challenge (fwd)
In-Reply-To: <201409041727.s84HR853023328@fido.openend.se>
References: <201409041727.s84HR853023328@fido.openend.se>
Message-ID: <5408F97A.8020900@oz.net>

On 09/04/2014 07:27 PM Laura Creighton wrote:
> Not important but I just received this in the mail.  Those of you
> who read python-announce will have got it as well.
>
> [forwarded message snipped]
>
> 1. s/emphasise/emphasis/
> 2. hmmm. so are we the real python now in the eyes of the world?  python
>    and CPython, they say ...
> 3. Anybody want to visit Kuala Lampur?
>
> Just wanted to mention it,
> Laura

1. s/Lampur/Lumpur/
2. http://en.wikipedia.org/wiki/Kuala_Lumpur
3. http://en.wikipedia.org/wiki/Muphry%27s_law

;-)

From songofacandy at gmail.com  Fri Sep  5 02:19:24 2014
From: songofacandy at gmail.com (INADA Naoki)
Date: Fri, 5 Sep 2014 09:19:24 +0900
Subject: [pypy-dev] Pypy Benchmark of Tornado.
In-Reply-To:
References:
Message-ID:

http://www.techempower.com/benchmarks/#section=data-r9&hw=i7&test=query&f=0-g-0-0

Motor on PyPy is fast.

On Fri, Sep 5, 2014 at 5:55 AM, Phyo Arkar wrote:
> Thanks a lot, Ben.
> OK, as PyPy is the choice of Quora and they also use Tornado, I might
> keep testing on larger projects.
> How about MongoDB performance on PyPy?  I heard it's slower due to
> having no C extension (and no CFFI driver) for PyPy.
> Your suggestions will be very appreciated.
>
> On Thu, Sep 4, 2014 at 4:56 AM, Jolitz Ben - bjolit wrote:
>> I use Tornado and have found PyPy can yield a 30-50% performance
>> increase for a moderately complex project.
>>
>> Ben
>>
>> [quoted benchmark output and confidentiality footer snipped]
>>
>> _______________________________________________
>> pypy-dev mailing list
>> pypy-dev at python.org
>> https://mail.python.org/mailman/listinfo/pypy-dev
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev

--
INADA Naoki

From Ben.Jolitz at acxiom.com  Fri Sep  5 18:47:22 2014
From: Ben.Jolitz at acxiom.com (Jolitz Ben - bjolit)
Date: Fri, 5 Sep 2014 16:47:22 +0000
Subject: [pypy-dev] Pypy Benchmark of Tornado.
In-Reply-To:
References:
Message-ID:

I don't have specific suggestions on Mongo, but I can share what I've
learned in a few months of using PyPy and Tornado.
You want to make use of CFFI in PyPy to accelerate operations that would
usually be slow in Python, namely encryption and database drivers.  But
always test first to see if you really need to go to C.

/Any/ CPython C-extension will torpedo performance.  Anything blocking
(like the majority of DB drivers) will similarly destroy Tornado
performance.

Code that is overly dynamic also does lousy on PyPy.  If your driver has
a ton of paths or makes idiot use of threading.Lock, expect to have an
uphill struggle in optimization.  When in doubt, ask yourself if the
algorithm is appropriate.

If you can't make the Python driver performant and there exists a C API
for it, then it is trivial to wrap it with CFFI.  If it doesn't support
nonblocking operations, you can find alternatives.  For example in
MySQLdb, others have found you can add the Connection._fd to the IOLoop
and use it to do a send_query, read_query.

If you still can't find an alternative to a blocking call, you can still
mimic nonblocking IO by using pthreads, a work queue and a callback
pthread.  It's not perfect and there's a lot you can optimize, but it
can easily allow you to delegate long-running operations to C.

Another thing I learned was to avoid generating Python-side C callback
pointers frequently.  If C is going to call back into Python with your
result to a unique request, you're going to need to tag it
appropriately.  I prefer to use dictionaries and the callback attribute
handed to me by gen.Task, and pass a unique-enough key to C to call back
in with.

As with everything in optimization, profile your code first.  If you're
losing speed heavily somewhere else, then the above will only serve to
distract you.

Cheers,
Ben

From: Phyo Arkar
Date: Thursday, September 4, 2014 at 1:55 PM
To: Ben Jolitz
Cc: pypy-dev
Subject: Re: [pypy-dev] Pypy Benchmark of Tornado.

Thanks a lot, Ben.
OK, as PyPy is the choice of Quora and they also use Tornado, I might
keep testing on larger projects.
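[Editor's note: Ben's "wrap the C API" advice looks roughly like this in practice. CFFI is the tool he recommends on PyPy; the sketch below uses the stdlib ctypes module instead, purely so it stays self-contained. It binds one real libc function; the wrapper name `c_strlen` is illustrative, not from any real driver.]

```python
# A minimal wrap-a-C-function sketch.  With cffi on PyPy you would write
# the equivalent via ffi.cdef("size_t strlen(const char *);") plus
# ffi.dlopen(None); ctypes is used here only to avoid a dependency.
import ctypes
import ctypes.util

# Load the C library (falls back to the running process's symbols).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the signature so ctypes converts arguments and results correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

def c_strlen(data):
    """Delegate the length computation to C.  Takes bytes, returns int."""
    return libc.strlen(data)
```

The point of Ben's "test first" caveat applies here too: for something this cheap, the call overhead can exceed the work, so only delegate operations that are genuinely expensive in Python.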
How about mongodb performance on PyPy , i heard its slower due to no C Extension (no CFFI) for pypy. Your suggestion will be very appreciated. On Thu, Sep 4, 2014 at 4:56 AM, Jolitz Ben - bjolit > wrote: I use Tornado and have found PyPy can yield a 30-50% performance increase for a moderately complex project. Ben From: Phyo Arkar > Date: Wednesday, September 3, 2014 at 8:21 AM To: pypy-dev > Subject: Re: [pypy-dev] Pypy Benchmark of Tornado. I expect pypy to be faster in those cases but select io is not cpu intensive thing to do so no real benefit using pypy here i guess. On Wed, Sep 3, 2014 at 4:43 PM, Phyo Arkar > wrote: > > It just return json document with a few thousand characters (1053 bytes) > $siege -c 400 -t 20s -r 2000 http://localhost:9999/js > > Python 2.7.7: > > Lifting the server siege... done. > > Transactions: 14478 hits > Availability: 100.00 % > Elapsed time: 19.10 secs > Data transferred: 14.54 MB > Response time: 0.01 secs > Transaction rate: 758.01 trans/sec > Throughput: 0.76 MB/sec > Concurrency: 8.91 > Successful transactions: 14478 > Failed transactions: 0 > Longest transaction: 1.08 seconds > Shortest transaction: 0.00 > > pypy-2.3.1 stable: > > Transactions: 15149 hits > Availability: 100.00 % > Elapsed time: 19.63 secs > Data transferred: 15.21 MB > Response time: 0.02 secs > Transaction rate: 771.73 trans/sec > Throughput: 0.77 MB/sec > Concurrency: 11.92 > Successful transactions: 15149 > Failed transactions: 0 > Longest transaction: 1.09 seconds > Shortest transaction: 0.00 > > > > pypy--c-jit-73283-912dd9df99a8-linux64 (latest nightly build) > > Lifting the server siege... done. 
> > Transactions: 14361 hits > Availability: 100.00 % > Elapsed time: 19.13 secs > Data transferred: 14.42 MB > Response time: 0.03 secs > Transaction rate: 750.71 trans/sec > Throughput: 0.75 MB/sec > Concurrency: 21.53 > Successful transactions: 14361 > Failed transactions: 0 > Longest transaction: 3.03 seconds > Shortest transaction: 0.00 > > > > > Pypy Nightly have some request Randomly get to 3.0 Seconds , normally those requests (in Cpython) are only ~0.001 to 0.002 sec. > It just return json document with a few thousand characters (1053 bytes) $siege -c 400 -t 20s -r 2000 http://localhost:9999/js Python 2.7.7: Lifting the server siege... done. Transactions: 14478 hits Availability: 100.00 % Elapsed time: 19.10 secs Data transferred: 14.54 MB Response time: 0.01 secs Transaction rate: 758.01 trans/sec Throughput: 0.76 MB/sec Concurrency: 8.91 Successful transactions: 14478 Failed transactions: 0 Longest transaction: 1.08 seconds Shortest transaction: 0.00 pypy-2.3.1 stable: Transactions: 15149 hits Availability: 100.00 % Elapsed time: 19.63 secs Data transferred: 15.21 MB Response time: 0.02 secs Transaction rate: 771.73 trans/sec Throughput: 0.77 MB/sec Concurrency: 11.92 Successful transactions: 15149 Failed transactions: 0 Longest transaction: 1.09 seconds Shortest transaction: 0.00 pypy--c-jit-73283-912dd9df99a8-linux64 (latest nightly build) Lifting the server siege... done. Transactions: 14361 hits Availability: 100.00 % Elapsed time: 19.13 secs Data transferred: 14.42 MB Response time: 0.03 secs Transaction rate: 750.71 trans/sec Throughput: 0.75 MB/sec Concurrency: 21.53 Successful transactions: 14361 Failed transactions: 0 Longest transaction: 3.03 seconds Shortest transaction: 0.00 Pypy Nightly have some request Randomly get to 3.0 Seconds , normally those requests (in Cpython) are only ~0.001 to 0.002 sec. 
*************************************************************************** The information contained in this communication is confidential, is intended only for the use of the recipient named above, and may be legally privileged. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please resend this communication to the sender and delete the original message or any copy of it from your computer system. Thank You. **************************************************************************** _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Sep 5 19:01:59 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 5 Sep 2014 11:01:59 -0600 Subject: [pypy-dev] Pypy Benchmark of Tornado. In-Reply-To: References: Message-ID: Hi Can you please put it all in a blog post (ideally with examples), it's a lot of useful info. I'm willing to help On Fri, Sep 5, 2014 at 10:47 AM, Jolitz Ben - bjolit wrote: > I don't have specific suggestions on Mongo, but I can share what I've > learned in a few months of using PyPy and Tornado. > > You want to make use of CFFI in PyPy to accelerate operations that would > usually be slow in Python, namely encryption and database drivers. But > always test first to see if you really need to go to C. > > /Any/ CPython C-Extensions will torpedo performance. Anything blocking (like > the majority of DB drivers) will similarly destroy Tornado performance. > > Code that is overly dynamic also does lousy on PyPy. If your driver has a > ton of paths or makes idiot use of threading.Lock, expect to have an uphill > struggle in optimization.
> > When in doubt, ask yourself if the algorithm is appropriate. > > If you can't make the Python driver performant and there exists a C-API for > it, then it is trivial to wrap it with CFFI. > > If it doesn't support nonblocking operations, you can find alternatives. For > example in MySQLdb, others have found you can add the Connection._fd to the > IOLoop and use it to do a send_query, read_query. If you still can't find an > alternative to a blocking call, you can still mimic nonblocking IO by using > pthreads, a work queue and a callback pthread. It's not perfect and there's a > lot you can optimize, but it can easily allow you to delegate long running > operations to C. > > Another thing I learned was to avoid generating Python-side C callback > pointers frequently. If C is going to call back into Python with your result > to a unique Request, you're going to need to tag it appropriately. I prefer > to use dictionaries and the callback attribute handed to me by gen.Task and > pass a unique-enough key to C to call back in with. > > As with everything in optimization, profile your code first. If you're > losing speed heavily somewhere else, then the above will only serve to > distract you. > > Cheers, > > Ben > > > From: Phyo Arkar > Date: Thursday, September 4, 2014 at 1:55 PM > To: Ben Jolitz > Cc: pypy-dev > > Subject: Re: [pypy-dev] Pypy Benchmark of Tornado. > > Thanks a lot Ben, > Ok , as PyPy is the choice of Quora and they also use Tornado , i might keep > testing on larger projects. > How about mongodb performance on PyPy , i heard it's slower due to no C > Extension (no CFFI) for pypy. > Your suggestion will be very appreciated. > > > On Thu, Sep 4, 2014 at 4:56 AM, Jolitz Ben - bjolit > wrote: >> >> I use Tornado and have found PyPy can yield a 30-50% performance increase >> for a moderately complex project. >> >> Ben >> >> From: Phyo Arkar >> Date: Wednesday, September 3, 2014 at 8:21 AM >> To: pypy-dev >> Subject: Re: [pypy-dev] Pypy Benchmark of Tornado.
>> >> I expect pypy to be faster in those cases but select io is not cpu >> intensive thing to do so no real benefit using pypy here i guess. >> On Wed, Sep 3, 2014 at 4:43 PM, Phyo Arkar >> wrote: >> > >> > It just return json document with a few thousand characters (1053 bytes) >> > $siege -c 400 -t 20s -r 2000 http://localhost:9999/js >> > >> > Python 2.7.7: >> > >> > Lifting the server siege... done. >> > >> > Transactions: 14478 hits >> > Availability: 100.00 % >> > Elapsed time: 19.10 secs >> > Data transferred: 14.54 MB >> > Response time: 0.01 secs >> > Transaction rate: 758.01 trans/sec >> > Throughput: 0.76 MB/sec >> > Concurrency: 8.91 >> > Successful transactions: 14478 >> > Failed transactions: 0 >> > Longest transaction: 1.08 seconds >> > Shortest transaction: 0.00 >> > >> > pypy-2.3.1 stable: >> > >> > Transactions: 15149 hits >> > Availability: 100.00 % >> > Elapsed time: 19.63 secs >> > Data transferred: 15.21 MB >> > Response time: 0.02 secs >> > Transaction rate: 771.73 trans/sec >> > Throughput: 0.77 MB/sec >> > Concurrency: 11.92 >> > Successful transactions: 15149 >> > Failed transactions: 0 >> > Longest transaction: 1.09 seconds >> > Shortest transaction: 0.00 >> > >> > >> > >> > pypy--c-jit-73283-912dd9df99a8-linux64 (latest nightly build) >> > >> > Lifting the server siege... done. >> > >> > Transactions: 14361 hits >> > Availability: 100.00 % >> > Elapsed time: 19.13 secs >> > Data transferred: 14.42 MB >> > Response time: 0.03 secs >> > Transaction rate: 750.71 trans/sec >> > Throughput: 0.75 MB/sec >> > Concurrency: 21.53 >> > Successful transactions: 14361 >> > Failed transactions: 0 >> > Longest transaction: 3.03 seconds >> > Shortest transaction: 0.00 >> > >> > >> > >> > >> > Pypy Nightly have some request Randomly get to 3.0 Seconds , normally >> > those requests (in Cpython) are only ~0.001 to 0.002 sec. 
>> > >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From phyo.arkarlwin at gmail.com Fri Sep 5 21:55:21 2014 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sat, 6 Sep 2014 02:25:21 +0630 Subject: [pypy-dev] Pypy Benchmark of Tornado. In-Reply-To: References: Message-ID: Jolitz Thanks a lot for your advice. There is the Motor driver by MongoDB (non-blocking) for Tornado, but it is built on top of PyMongo (which has a C Extension for speedup and also a pure-python mode). In the case of pypy, PyMongo disables the C Extensions. I was in FUD in the case of motor + pypy due to this: http://blog.kgriffs.com/2012/12/12/gevent-vs-tornado-benchmarks.html. But looking at: http://www.techempower.com/benchmarks/#section=data-r9&hw=i7&test=query&f=0-g-0-0 I am convinced, I guess there is no need for a CFFI driver. pypy+motor+tornado is indeed 2x faster in real-world multiple queries, vs normal tornado. 4 times faster in other "Pissing Contest" cases :) @Ben and Jeese , i have CCed you , I believe you will be interested in this too.
!!PyPy + Motor beat the performance of Node.js + MongoDB in real-world use cases (multi query): nodejs-mongodb-raw 4,430 tornado-pypy 3,244 nodejs-mongodb 3,229 http://www.techempower.com/benchmarks/#section=data-r9&hw=peak&test=json&l=1s0&d=9 On Fri, Sep 5, 2014 at 11:17 PM, Jolitz Ben - bjolit wrote: > I don't have specific suggestions on Mongo, but I can share what I've > learned in a few months of using PyPy and Tornado. > > You want to make use of CFFI in PyPy to accelerate operations that would > usually be slow in Python, namely encryption and database drivers. But > always test first to see if you really need to go to C. > > /Any/ CPython C-Extensions will torpedo performance. Anything blocking (like > the majority of DB drivers) will similarly destroy Tornado performance. > > Code that is overly dynamic also does lousy on PyPy. If your driver has a > ton of paths or makes idiot use of threading.Lock, expect to have an uphill > struggle in optimization. > > When in doubt, ask yourself if the algorithm is appropriate. > > If you can't make the Python driver performant and there exists a C-API for > it, then it is trivial to wrap it with CFFI. > > If it doesn't support nonblocking operations, you can find alternatives. For > example in MySQLdb, others have found you can add the Connection._fd to the > IOLoop and use it to do a send_query, read_query. If you still can't find an > alternative to a blocking call, you can still mimic nonblocking IO by using > pthreads, a work queue and a callback pthread. It's not perfect and there's a > lot you can optimize, but it can easily allow you to delegate long running > operations to C. > > Another thing I learned was to avoid generating Python-side C callback > pointers frequently. If C is going to call back into Python with your result > to a unique Request, you're going to need to tag it appropriately.
I prefer > to use dictionaries and the callback attribute handed to me by gen.Task and > pass a unique-enough key to C to call back in with. > > As with everything in optimization, profile your code first. If you're > losing speed heavily somewhere else, then the above will only serve to > distract you. > > Cheers, > > Ben > > > From: Phyo Arkar > Date: Thursday, September 4, 2014 at 1:55 PM > To: Ben Jolitz > Cc: pypy-dev > > Subject: Re: [pypy-dev] Pypy Benchmark of Tornado. > > Thanks a lot Ben, > Ok , as PyPy is the choice of Quora and they also use Tornado , i might keep > testing on larger projects. > How about mongodb performance on PyPy , i heard it's slower due to no C > Extension (no CFFI) for pypy. > Your suggestion will be very appreciated. > > > On Thu, Sep 4, 2014 at 4:56 AM, Jolitz Ben - bjolit > wrote: >> >> I use Tornado and have found PyPy can yield a 30-50% performance increase >> for a moderately complex project. >> >> Ben >> >> From: Phyo Arkar >> Date: Wednesday, September 3, 2014 at 8:21 AM >> To: pypy-dev >> Subject: Re: [pypy-dev] Pypy Benchmark of Tornado. >> >> I expect pypy to be faster in those cases but select io is not cpu >> intensive thing to do so no real benefit using pypy here i guess. >> On Wed, Sep 3, 2014 at 4:43 PM, Phyo Arkar >> wrote: >> > >> > It just return json document with a few thousand characters (1053 bytes) >> > $siege -c 400 -t 20s -r 2000 http://localhost:9999/js >> > >> > Python 2.7.7: >> > >> > Lifting the server siege... done.
>> > >> > Transactions: 14478 hits >> > Availability: 100.00 % >> > Elapsed time: 19.10 secs >> > Data transferred: 14.54 MB >> > Response time: 0.01 secs >> > Transaction rate: 758.01 trans/sec >> > Throughput: 0.76 MB/sec >> > Concurrency: 8.91 >> > Successful transactions: 14478 >> > Failed transactions: 0 >> > Longest transaction: 1.08 seconds >> > Shortest transaction: 0.00 >> > >> > pypy-2.3.1 stable: >> > >> > Transactions: 15149 hits >> > Availability: 100.00 % >> > Elapsed time: 19.63 secs >> > Data transferred: 15.21 MB >> > Response time: 0.02 secs >> > Transaction rate: 771.73 trans/sec >> > Throughput: 0.77 MB/sec >> > Concurrency: 11.92 >> > Successful transactions: 15149 >> > Failed transactions: 0 >> > Longest transaction: 1.09 seconds >> > Shortest transaction: 0.00 >> > >> > >> > >> > pypy--c-jit-73283-912dd9df99a8-linux64 (latest nightly build) >> > >> > Lifting the server siege... done. >> > >> > Transactions: 14361 hits >> > Availability: 100.00 % >> > Elapsed time: 19.13 secs >> > Data transferred: 14.42 MB >> > Response time: 0.03 secs >> > Transaction rate: 750.71 trans/sec >> > Throughput: 0.75 MB/sec >> > Concurrency: 21.53 >> > Successful transactions: 14361 >> > Failed transactions: 0 >> > Longest transaction: 3.03 seconds >> > Shortest transaction: 0.00 >> > >> > >> > >> > >> > Pypy Nightly have some request Randomly get to 3.0 Seconds , normally >> > those requests (in Cpython) are only ~0.001 to 0.002 sec. >> > >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> > From phyo.arkarlwin at gmail.com Fri Sep 5 21:57:23 2014 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sat, 6 Sep 2014 02:27:23 +0630 Subject: [pypy-dev] Pypy Benchmark of Tornado. In-Reply-To: References: Message-ID: Yes, this really deserves a blog post. I haven't finished writing my own blog yet tho :D. On Fri, Sep 5, 2014 at 11:31 PM, Maciej Fijalkowski wrote: > Hi > > Can you please put it all in a blog post (ideally with examples), it's > a lot of useful info. > > I'm willing to help > > On Fri, Sep 5, 2014 at 10:47 AM, Jolitz Ben - bjolit > wrote: >> I don't have specific suggestions on Mongo, but I can share what I've >> learned in a few months of using PyPy and Tornado. >> >> You want to make use of CFFI in PyPy to accelerate operations that would >> usually be slow in Python, namely encryption and database drivers. But >> always test first to see if you really need to go to C. >> >> /Any/ CPython C-Extensions will torpedo performance. Anything blocking (like >> the majority of DB drivers) will similarly destroy Tornado performance. >> >> Code that is overly dynamic also does lousy on PyPy. If your driver has a >> ton of paths or makes idiot use of threading.Lock, expect to have an uphill >> struggle in optimization. >> >> When in doubt, ask yourself if the algorithm is appropriate. >> >> If you can't make the Python driver performant and there exists a C-API for >> it, then it is trivial to wrap it with CFFI. >> >> If it doesn't support nonblocking operations, you can find alternatives. For >> example in MySQLdb, others have found you can add the Connection._fd to the >> IOLoop and use it to do a send_query, read_query.
If you still can't find an >> alternative to a blocking call, you can still mimic nonblocking IO by using >> pthreads, a work queue and a callback pthread. It's not perfect and there's a >> lot you can optimize, but it can easily allow you to delegate long running >> operations to C. >> >> Another thing I learned was to avoid generating Python-side C callback >> pointers frequently. If C is going to call back into Python with your result >> to a unique Request, you're going to need to tag it appropriately. I prefer >> to use dictionaries and the callback attribute handed to me by gen.Task and >> pass a unique-enough key to C to call back in with. >> >> As with everything in optimization, profile your code first. If you're >> losing speed heavily somewhere else, then the above will only serve to >> distract you. >> >> Cheers, >> >> Ben >> >> >> From: Phyo Arkar >> Date: Thursday, September 4, 2014 at 1:55 PM >> To: Ben Jolitz >> Cc: pypy-dev >> >> Subject: Re: [pypy-dev] Pypy Benchmark of Tornado. >> >> Thanks a lot Ben, >> Ok , as PyPy is the choice of Quora and they also use Tornado , i might keep >> testing on larger projects. >> How about mongodb performance on PyPy , i heard it's slower due to no C >> Extension (no CFFI) for pypy. >> Your suggestion will be very appreciated. >> >> >> On Thu, Sep 4, 2014 at 4:56 AM, Jolitz Ben - bjolit >> wrote: >>> >>> I use Tornado and have found PyPy can yield a 30-50% performance increase >>> for a moderately complex project. >>> >>> Ben >>> >>> From: Phyo Arkar >>> Date: Wednesday, September 3, 2014 at 8:21 AM >>> To: pypy-dev >>> Subject: Re: [pypy-dev] Pypy Benchmark of Tornado. >>> >>> I expect pypy to be faster in those cases but select io is not cpu >>> intensive thing to do so no real benefit using pypy here i guess.
>>> On Wed, Sep 3, 2014 at 4:43 PM, Phyo Arkar >>> wrote: >>> > >>> > It just return json document with a few thousand characters (1053 bytes) >>> > $siege -c 400 -t 20s -r 2000 http://localhost:9999/js >>> > >>> > Python 2.7.7: >>> > >>> > Lifting the server siege... done. >>> > >>> > Transactions: 14478 hits >>> > Availability: 100.00 % >>> > Elapsed time: 19.10 secs >>> > Data transferred: 14.54 MB >>> > Response time: 0.01 secs >>> > Transaction rate: 758.01 trans/sec >>> > Throughput: 0.76 MB/sec >>> > Concurrency: 8.91 >>> > Successful transactions: 14478 >>> > Failed transactions: 0 >>> > Longest transaction: 1.08 seconds >>> > Shortest transaction: 0.00 >>> > >>> > pypy-2.3.1 stable: >>> > >>> > Transactions: 15149 hits >>> > Availability: 100.00 % >>> > Elapsed time: 19.63 secs >>> > Data transferred: 15.21 MB >>> > Response time: 0.02 secs >>> > Transaction rate: 771.73 trans/sec >>> > Throughput: 0.77 MB/sec >>> > Concurrency: 11.92 >>> > Successful transactions: 15149 >>> > Failed transactions: 0 >>> > Longest transaction: 1.09 seconds >>> > Shortest transaction: 0.00 >>> > >>> > >>> > >>> > pypy--c-jit-73283-912dd9df99a8-linux64 (latest nightly build) >>> > >>> > Lifting the server siege... done. >>> > >>> > Transactions: 14361 hits >>> > Availability: 100.00 % >>> > Elapsed time: 19.13 secs >>> > Data transferred: 14.42 MB >>> > Response time: 0.03 secs >>> > Transaction rate: 750.71 trans/sec >>> > Throughput: 0.75 MB/sec >>> > Concurrency: 21.53 >>> > Successful transactions: 14361 >>> > Failed transactions: 0 >>> > Longest transaction: 3.03 seconds >>> > Shortest transaction: 0.00 >>> > >>> > >>> > >>> > >>> > Pypy Nightly have some request Randomly get to 3.0 Seconds , normally >>> > those requests (in Cpython) are only ~0.001 to 0.002 sec. >>> > >>> >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> https://mail.python.org/mailman/listinfo/pypy-dev >>> >> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> From phyo.arkarlwin at gmail.com Sat Sep 6 22:55:10 2014 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 7 Sep 2014 03:25:10 +0630 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) Message-ID: I am wondering, is it theoretically possible to reach the performance of HHVM? From the benchmark game, it is the highest-performing of all the dynamic languages. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Sep 6 22:59:43 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 6 Sep 2014 14:59:43 -0600 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: bah. Did you notice that pypy is not even in the benchmark game? On Sat, Sep 6, 2014 at 2:55 PM, Phyo Arkar wrote: > I am wondering, is it theoretically possible to reach performance of HHVM? > From benchmark game , it is the highest performance of all the dynamic > languages.
> > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From phyo.arkarlwin at gmail.com Sat Sep 6 23:01:11 2014 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 7 Sep 2014 03:31:11 +0630 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: http://www.techempower.com/benchmarks/#section=data-r9&hw=peak&test=query Sorry , this benchmark game i meant :D On Sun, Sep 7, 2014 at 3:29 AM, Maciej Fijalkowski wrote: > bah. > > Did you notice that pypy is not even in the benchmark game? > > On Sat, Sep 6, 2014 at 2:55 PM, Phyo Arkar > wrote: > > I am wondering, is it theoretically possible to reach performance of > HHVM? > > From benchmark game , it is the highest performance of all the dynamic > > languages. > > > > > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wizzat at gmail.com Sat Sep 6 23:02:57 2014 From: wizzat at gmail.com (Mark Roberts) Date: Sat, 6 Sep 2014 14:02:57 -0700 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: The benchmarks game shows Hack being 190x slower than Python3 at calculating pidigits. The benchmarks for Hack are all terrible. What makes you say it is the fastest dynamic language in the game? -Mark On Sat, Sep 6, 2014 at 1:55 PM, Phyo Arkar wrote: > I am wondering, is it theoretically possible to reach performance of HHVM? > From benchmark game , it is the highest performance of all the dynamic > languages. 
> > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Sep 6 23:05:46 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 6 Sep 2014 15:05:46 -0600 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: I have no idea what they're doing there, which version of pypy they're using etc. They're also comparing apples and oranges (a random choice of web framework?) For example, uwsgi+pypy outperforms tornado + pypy by quite a large margin (I hit client limits when trying), but it implements less. How much does hhvm implement? Given how php works, "not that much", but who knows. Coming back to the question - depending on *what* you do, pypy and hhvm outperform each other. This particular test is quite terrible (and a single data point anyway), so your question is really the wrong one to ask. On Sat, Sep 6, 2014 at 3:01 PM, Phyo Arkar wrote: > http://www.techempower.com/benchmarks/#section=data-r9&hw=peak&test=query > > Sorry , this benchmark game i meant :D > > > On Sun, Sep 7, 2014 at 3:29 AM, Maciej Fijalkowski wrote: >> >> bah. >> >> Did you notice that pypy is not even in the benchmark game? >> >> On Sat, Sep 6, 2014 at 2:55 PM, Phyo Arkar >> wrote: >> > I am wondering, is it theoretically possible to reach performance of >> > HHVM? >> > From benchmark game , it is the highest performance of all the dynamic >> > languages.
>> > >> > >> > _______________________________________________ >> > pypy-dev mailing list >> > pypy-dev at python.org >> > https://mail.python.org/mailman/listinfo/pypy-dev >> > > > From phyo.arkarlwin at gmail.com Sat Sep 6 23:06:15 2014 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 7 Sep 2014 03:36:15 +0630 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: sorry guys , i forgot to mention i was talking about TechEmpower benchmarks. Not Debian shootout. I haven't checked there yet. But i think Techempower case is a bit more real world? but only covering web+database benchmarks tho. -------------- next part -------------- An HTML attachment was scrubbed... URL: From phyo.arkarlwin at gmail.com Sat Sep 6 23:11:12 2014 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 7 Sep 2014 03:41:12 +0630 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: >uwsgi+pypy outperforms tornado + pypy by quite a large margin (I hit client limits when trying) Very interesting , I've got to try the uwsgi+pypy-tornado combo. On Sun, Sep 7, 2014 at 3:36 AM, Phyo Arkar wrote: > sorry guys , i forgot to mention i was talking about TechEmpower > benchmarks. Not Debian shootout. I haven't checked there yet. > But i think Techempower case is a bit more real world? but only covering > web+database benchmarks tho. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Sep 6 23:20:24 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 6 Sep 2014 15:20:24 -0600 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: On Sat, Sep 6, 2014 at 3:11 PM, Phyo Arkar wrote: >>uwsgi+pypy outperforms tornado + pypy by > quite a large margin (I hit client limits when trying) > > Very interesting , I've got to try the uwsgi+pypy-tornado combo. so here is a trick.
tornado is a web framework that does tons of stuff. PHP does not do all this stuff. Why not check the raw wsgi comparison? And, while I'm at it, it does not really matter AT ALL. Raw HTTP comparison is nice and dandy, but you end up with 1000s of requests per sec (10s of 1000s sometimes), while real applications, like say wordpress, don't exceed 20/s on one CPU, so there is really no point. From phyo.arkarlwin at gmail.com Sat Sep 6 23:32:00 2014 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 7 Sep 2014 04:02:00 +0630 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: Right, I just realized that. They're using plain HHVM vs. all the other frameworks, which is not even fair. The only thing there that is not a framework is Node.js. On Sun, Sep 7, 2014 at 3:41 AM, Phyo Arkar wrote: >>uwsgi+pypy outperforms tornado + pypy by > quite a large margin (I hit client limits when trying) > > Very interesting , i got to try uwsgi+pypy-tornado combo. > > > On Sun, Sep 7, 2014 at 3:36 AM, Phyo Arkar wrote: >> >> sorry guys , i forgot to mention i was talking about TechEmpower >> benchmarks. Not Debian shootout. I haven;t check there yet. >> But i think Techempower case is a bit more real world? but only covering >> web+database benchmarks tho. >> >> > From songofacandy at gmail.com Sun Sep 7 01:32:07 2014 From: songofacandy at gmail.com (INADA Naoki) Date: Sun, 7 Sep 2014 08:32:07 +0900 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: I'm a contributor of the Python benchmarks in the game. Please don't rely on the performance on PEAK, especially the multiple query test. Contributors only use the EC2 environment. I can't know how many workers are required to maximize performance on PEAK and i7. CPU bound tests (json, plaintext) should be CPU bound, and single query tests (query, fortune) may be CPU bound. But the multi query test and data update test may not be CPU bound. We're preparing round 10.
You will be able to see HHVM vs PyPy in CPU bound tests. On Sun, Sep 7, 2014 at 6:32 AM, Phyo Arkar wrote: > Right, I just realize that. There using plain HHVM vs All other > Frameworks , not even fair. > Only thing that is not framework is Node.js there. > > On Sun, Sep 7, 2014 at 3:41 AM, Phyo Arkar wrote: >>>uwsgi+pypy outperforms tornado + pypy by >> quite a large margin (I hit client limits when trying) >> >> Very interesting , i got to try uwsgi+pypy-tornado combo. >> >> >> On Sun, Sep 7, 2014 at 3:36 AM, Phyo Arkar wrote: >>> >>> sorry guys , i forgot to mention i was talking about TechEmpower >>> benchmarks. Not Debian shootout. I haven;t check there yet. >>> But i think Techempower case is a bit more real world? but only covering >>> web+database benchmarks tho. >> >> > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev -- INADA Naoki From fijall at gmail.com Sun Sep 7 03:01:49 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 6 Sep 2014 19:01:49 -0600 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: On Sat, Sep 6, 2014 at 5:32 PM, INADA Naoki wrote: > I'm contributor of Python benchmarks in the game. > Please don't rely on performance on PEAK, especially multiple query test. > > Contributors use only EC2 environment. > I can't know how many workers required for maximize performance on PEAK and i7. > > CPU bound tests (json, plaintext) should be CPU bound, and single query tests > (query, fortune) may be CPU bound. > But multi query test and data update test may not be CPU bound. > > We'are preparing round 10. You will be able to see HHVM vs PyPy in > CPU bound tests. Hi Inada did you read my previous post how this is apples to oranges? 
You should set up something more barebone than tornado to "compare". From arigo at tunes.org Sun Sep 7 08:12:21 2014 From: arigo at tunes.org (Armin Rigo) Date: Sun, 7 Sep 2014 08:12:21 +0200 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: Hi all, On 7 September 2014 03:01, Maciej Fijalkowski wrote: > did you read my previous post how this is apples to oranges? You > should setup some more barebone thing than tornado to "compare" Maybe we should mention that PyPy's JIT technology, when applied straight to the Hippy VM, gives a PHP that is comparable in speed to HHVM. PyPy itself has seen Python-specific optimizations for a longer time than Hippy for PHP, however. Anyway, our point is that questions like "when will PyPy be as fast as HHVM" are loaded with the implicit assumption "on this benchmark set X". We might as well answer "PyPy has been much faster than today's HHVM for many years", since that's true on benchmark set Y. Actually, there are such benchmark sets (which we trust more because they try to compare apples to apples) showing PyPy to be in the same ballpark performance as V8. But feel free to trust whatever benchmark set you want, obviously. Personally I would ditch PHP in favour of a more reasonable language any day of the week. Then I'd check that the final performance of my rewritten app is still ok on . I bet it would be. A bientôt, Armin. From scott.gregory.west at gmail.com Sun Sep 7 13:42:07 2014 From: scott.gregory.west at gmail.com (Scott West) Date: Sun, 07 Sep 2014 13:42:07 +0200 Subject: [pypy-dev] PyPy for analysis? Message-ID: <540C448F.1050908@gmail.com> Hello all, I was looking recently into trying to do some simple static analysis of Python programs (to experiment with what is possible), and came across PyPy.
Reading some of the documentation it seems that PyPy forms a control flow graph and does some abstract interpretation, which seems to indicate that it could be used for other forms of static analysis! I had a couple questions: 1) Would PyPy (seem) to be a good place to start for a general analysis framework? If so, would it be of general interest? 2) If one doesn't care about code generation, can the build time be reduced? I tried just using pyinteractive.py and that _seemed_ to work, though even that compiles a few things to start. 3) What is a good way to get into modifying the internals (adding analyses, skipping code gen.)? I have read the chapter of the Open Source Architecture book, and some of the documentation pages. I would be most grateful if anyone could provide any comments on these issues, or pointers to other similar works. Thanks! Regards, Scott From rymg19 at gmail.com Sun Sep 7 14:55:09 2014 From: rymg19 at gmail.com (Ryan) Date: Sun, 07 Sep 2014 07:55:09 -0500 Subject: [pypy-dev] PyPy for analysis? In-Reply-To: <540C448F.1050908@gmail.com> References: <540C448F.1050908@gmail.com> Message-ID: You might find mypy(http://mypy-lang.org/) interesting. Scott West wrote: >Hello all, > >I was looking recently into trying to do some simple static analysis of > >Python programs (to experiment with what is possible), and came across >PyPy. Reading some of the documentation it seems that PyPy forms a >control flow graph and does some abstract interpretation, which seems >to >indicate that it could be used for other forms of static analysis! > >I had a couple questions: > 1) Would PyPy (seem) to be a good place to start for a general >analysis framework? If so, would it be of general interest? > 2) If one doesn't care about code generation, can the build time be >reduced? I tried just using pyinteractive.py and that _seemed_ to work, > >though even that compiles a few things to start. 
> 3) What is a good way to get into modifying the internals (adding >analyses, skipping code gen.)? I have read the chapter of the Open >Source Architecture book, and some of the documentation pages. > >I would be most grateful if anyone could provide any comments on these >issues, or pointers to other similar works. > >Thanks! > >Regards, >Scott >_______________________________________________ >pypy-dev mailing list >pypy-dev at python.org >https://mail.python.org/mailman/listinfo/pypy-dev -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Sun Sep 7 17:35:08 2014 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Mon, 8 Sep 2014 01:35:08 +1000 Subject: [pypy-dev] PyPy for analysis? In-Reply-To: References: <540C448F.1050908@gmail.com> Message-ID: Ugh, thanks Gmail (: On 8 September 2014 01:33, William ML Leslie wrote: > On 7 September 2014 21:42, Scott West > wrote: > >> Hello all, >> >> I was looking recently into trying to do some simple static analysis of >> Python programs (to experiment with what is possible), and came across >> PyPy. Reading some of the documentation it seems that PyPy forms a control >> flow graph and does some abstract interpretation, which seems to indicate >> that it could be used for other forms of static analysis! >> > > Static analysis of python programs is not difficult, whether you use ast > or code objects. The flow object space really only works on rpython > though; using it to analyse python code generally is not sensible. > > I do think pypy is a great platform for associating axioms with builtin > functions and bytecodes (and sometimes other weird stuff like exception > handling, depending on your analysis) which you would otherwise have to > write by hand.
If you want to do that, target the llgraph stage of the > compiler, as there is nothing 'outside' it that you need to rely upon. You > can possibly even generate the app-level analyser from the interpreter, but > in order to do this I had to rewrite the eval loop - I could never figure > out how to extract semantics from the frame object. > > I did at one point have something like a points-to analysis for > interp-level code working (a region polymorphic effects analysis), but it > has bitrotten beyond repair. I would still say pypy is a great place to > start, however. > > >> >> I had a couple questions: >> 1) Would PyPy (seem) to be a good place to start for a general analysis >> framework? If so, would it be of general interest? >> 2) If one doesn't care about code generation, can the build time be >> reduced? I tried just using pyinteractive.py and that _seemed_ to work, >> though even that compiles a few things to start. >> > > By about 40% if you target llgraph by my last count. You could avoid > building the jit, since it should not have any semantic impact. > > >> 3) What is a good way to get into modifying the internals (adding >> analyses, skipping code gen.)? I have read the chapter of the Open Source >> Architecture book, and some of the documentation pages. >> > > 0) Write lots of tests. > > The existing tests make for good starting points. > > 1) Start with a very small interpreter, not the full pypy interpreter. IIRC > there was a language called TL which was for testing the JIT, use it as the > target until everything works. > > 2) Dumping the llgraph to disk somewhere - perhaps into a logic or > relational database - will make little experiments easier. > >> >> I would be most grateful if anyone could provide any comments on these >> issues, or pointers to other similar works.
>> > > Unfortunately I have no idea what state my work was in when I left it, I > don't even have it on my current box; I would like to get back to it at > some point but Armin suggested I look at better low level languages to > target and I'm still working in that space. > > -- > William Leslie > > Notice: > Likely much of this email is, by the nature of copyright, covered under > copyright law. You absolutely MAY reproduce any part of it in accordance > with the copyright law of the nation you are reading this in. Any attempt > to DENY YOU THOSE RIGHTS would be illegal without prior contractual > agreement. > -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan.lamy at gmail.com Sun Sep 7 18:06:48 2014 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Sun, 07 Sep 2014 17:06:48 +0100 Subject: [pypy-dev] PyPy for analysis? In-Reply-To: <540C448F.1050908@gmail.com> References: <540C448F.1050908@gmail.com> Message-ID: <540C8298.5060008@gmail.com> Le 07/09/14 12:42, Scott West a écrit : > Hello all, > > I was looking recently into trying to do some simple static analysis of > Python programs (to experiment with what is possible), and came across > PyPy. Reading some of the documentation it seems that PyPy forms a > control flow graph and does some abstract interpretation, which seems to > indicate that it could be used for other forms of static analysis! First, PyPy doesn't do static analysis; it's the RPython toolchain [http://rpython.readthedocs.org/en/improve-docs/getting-started.html] that does.
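For arbitrary Python, by contrast, a rough control-flow skeleton can be recovered with nothing but the stdlib `dis` module. Here is a minimal sketch (these are not RPython's flow graphs; it merely finds basic-block leaders from bytecode, and the exact offsets it reports vary by Python version):

```python
import dis

def block_leaders(func):
    """Bytecode offsets where a basic block starts (a rough CFG skeleton)."""
    instrs = list(dis.get_instructions(func))
    leaders = {instrs[0].offset}          # the entry point starts a block
    for ins, nxt in zip(instrs, instrs[1:] + [None]):
        if 'JUMP' in ins.opname or ins.opname in ('FOR_ITER', 'RETURN_VALUE'):
            if isinstance(ins.argval, int):
                leaders.add(ins.argval)   # the jump target starts a block
            if nxt is not None:
                leaders.add(nxt.offset)   # so does the fall-through
    return sorted(leaders)

def f(n):
    total = 0
    for i in range(n):
        total += i
    return total

print(block_leaders(f))   # several offsets: loop head, loop body, loop exit
```

From the leaders it is then a small step to group instructions into blocks and add edges for jumps and fall-throughs, which gives a workable CFG for simple analyses.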
> > I had a couple questions: > 1) Would PyPy (seem) to be a good place to start for a general > analysis framework? If so, would it be of general interest? Yes and no. The RPython toolchain does a lot of complex analysis; however, it's meant to be used on RPython programs, not arbitrary Python. This means that it'll error out on valid Python code more often than not. > 2) If one doesn't care about code generation, can the build time be > reduced? I tried just using pyinteractive.py and that _seemed_ to work, > though even that compiles a few things to start. The annotation phase, where most of the analysis is done, takes only a fraction of the time of a full translation. You can run only that by calling 'rpython --annotate' instead of 'rpython'. It checks that the program is RPython and generates all the type-inferred flow graphs, but note that it doesn't yield any useful artefact. pyinteractive.py simply runs PyPy as a Python program on top of your standard interpreter. It's not useful for static analysis. For an interactive way of running the toolchain, use rpython.translator.interactive (you need to have pygame and graphviz installed). For instance:

>>> from rpython.translator.interactive import Translation
>>> def f(n): return n+1
>>> t = Translation(f, [int])
>>> t.annotate()
>>> t.view()

> 3) What is a good way to get into modifying the internals (adding > analyses, skipping code gen.)? I have read the chapter of the Open > Source Architecture book, and some of the documentation pages. The internals are poorly documented and have messy interdependencies. The AOSA chapter is somewhat outdated. OTOH the code is relatively well tested. So your best bet is to come ask questions on IRC (#pypy on freenode.net) and use test-driven development.
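As a toy illustration of the kind of inference the annotation phase performs (a drastic simplification: the real annotator works over flow graphs with a much richer lattice, not over ASTs; the two-type lattice below is purely illustrative):

```python
import ast

def annotate_expr(source, env):
    """Infer 'int' or 'float' for an arithmetic expression, joining
    types at each operation (float absorbs int, lattice-style)."""
    def join(a, b):
        return 'float' if 'float' in (a, b) else 'int'

    def visit(node):
        if isinstance(node, ast.BinOp):
            return join(visit(node.left), visit(node.right))
        if isinstance(node, ast.Constant):
            return 'float' if isinstance(node.value, float) else 'int'
        if isinstance(node, ast.Name):
            return env[node.id]           # types of free variables, given up front
        raise NotImplementedError(type(node).__name__)

    return visit(ast.parse(source, mode='eval').body)

print(annotate_expr('n + 1', {'n': 'int'}))    # int
print(annotate_expr('n * 0.5', {'n': 'int'}))  # float
```

The real annotator does the same kind of join, but over every operation in every flow graph, iterating to a fixed point and giving up (that is, reporting the program as non-RPython) when types degenerate too far.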
> > Regards, > Scott > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From scott.gregory.west at gmail.com Sun Sep 7 20:25:21 2014 From: scott.gregory.west at gmail.com (Scott West) Date: Sun, 07 Sep 2014 20:25:21 +0200 Subject: [pypy-dev] PyPy for analysis? In-Reply-To: References: <540C448F.1050908@gmail.com> Message-ID: <540CA311.30801@gmail.com> Hello William, Thanks for the comments. I see now (also from Ronan's reply) that it only handles RPython, so I guess ends the question right about there :). Basically I just wanted to see if I was missing some framework that handled transforming Python code into a CFG for the purposes of analysis. Otherwise I can just grab the bytecode or AST and do it myself, but I didn't want to replicate any work if I didn't have to :) Regards, Scott On 07/09/14 05:35 PM, William ML Leslie wrote: > Ugh, thanks Gmail (: > > > On 8 September 2014 01:33, William ML Leslie > > wrote: > > On 7 September 2014 21:42, Scott West > wrote: > > Hello all, > > I was looking recently into trying to do some simple static > analysis of Python programs (to experiment with what is > possible), and came across PyPy. Reading some of the > documentation it seems that PyPy forms a control flow graph and > does some abstract interpretation, which seems to indicate that > it could be used for other forms of static analysis! > > > Static analysis of python programs is not difficult, weather you use > ast or code objects. The flow object space really only works on > rpython though, using it to analyse python code generally is not > sensible. > > I do think pypy is a great platform for associating axioms with > builtin functions and bytecodes (and sometimes other weird stuff > like exception handling, depending on your analysis) which you would > otherwise have to write by hand.? 
If you want to do that, target > the llgraph stage of the compiler, as there is nothing 'outside' it > that you need to rely upon. You can possibly even generate the > app-level analyser from the interpreter, but in order to do this I > had to rewrite the eval loop - I could never figure out how to > extract semantics from the frame object. > > I did at one point have something like a points-to analysis for > interp-level code working (a region polymorphic effects analysis), > but it has bitrotten beyond repair. I would still say pypy is a > great place to start, however. > > > I had a couple questions: > 1) Would PyPy (seem) to be a good place to start for a > general analysis framework? If so, would it be of general interest? > 2) If one doesn't care about code generation, can the build > time be reduced? I tried just using pyinteractive.py and that > _seemed_ to work, though even that compiles a few things to start. > > > By about 40% if you target llgraph by my last count. You could > avoid building the jit, since it should not have any semantic impact.? > > 3) What is a good way to get into modifying the internals > (adding analyses, skipping code gen.)? I have read the chapter > of the Open Source Architecture book, and some of the > documentation pages. > ?? > > > 0) ?Write lots of tests.? > ? The existing tests make for good starting points.? > ?? > ?1) ? > ?Start with a very small interpreter, not the full pypy > interpreter. IIRC there was a language called TL which was for > testing the JIT, use it as the target until everything works.? > > 2) Dumping the llgraph to disk somewhere - perhaps into a logic or > relational database - will make little experiments easier. > ?? > ?? > > ?? > ?? > > I w > ?? > ?? > ould be most grateful if anyone could provide any comments on > these issues, or pointers to other similar works. 
> > > ?Unfortunately I have no idea what state my work was in when I left > it, I don't even have it on my current box; I would like to get back > to it at some point but Armin suggested I look at better low level > languages to target and I'm still working in that space.? > > -- > William Leslie > > Notice: > Likely much of this email is, by the nature of copyright, covered > under copyright law. You absolutely MAY reproduce any part of it in > accordance with the copyright law of the nation you are reading this > in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without > prior contractual agreement. > > > > > -- > William Leslie > > Notice: > Likely much of this email is, by the nature of copyright, covered under > copyright law. You absolutely MAY reproduce any part of it in > accordance with the copyright law of the nation you are reading this in. > Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior > contractual agreement. From yury at shurup.com Sun Sep 7 20:59:59 2014 From: yury at shurup.com (Yury V. Zaytsev) Date: Sun, 07 Sep 2014 20:59:59 +0200 Subject: [pypy-dev] PyPy for analysis? In-Reply-To: <540CA311.30801@gmail.com> References: <540C448F.1050908@gmail.com> <540CA311.30801@gmail.com> Message-ID: <1410116399.2642.6.camel@newpride> On Sun, 2014-09-07 at 20:25 +0200, Scott West wrote: > > Otherwise I can just grab the bytecode or AST and do it myself, but I > didn't want to replicate any work if I didn't have to :) I'm not exactly sure what kind of analysis you have in mind, but you might wish to have a look at pysonar, if you haven't seen it already: https://github.com/yinwang0/pysonar2 Maybe it could be a good starting point, although it's in Java... -- Sincerely yours, Yury V. 
Zaytsev From phyo.arkarlwin at gmail.com Sun Sep 7 21:25:49 2014 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Mon, 8 Sep 2014 01:55:49 +0630 Subject: [pypy-dev] Will pypy reach performance of HHVM?(one Day) In-Reply-To: References: Message-ID: Thanks a lot for the explanation, Armin Rigo. > PyPy to be in the same > ballpark performance as V8 Since the Debian shootout no longer includes pypy, we do not know where to look for good benchmarks, so I had to look at the TechEmpower benchmark. It's my fault for looking at the TechEmpower benchmark and not realizing they are comparing apples to bananas, not even oranges. >I would ditch PHP in favour of a more reasonable language After years and years of programming in "main stream" programming languages for almost a decade, I discovered Python and ditched all those mainstreams (Java, .Net, PHP and RoR). I found Python kept getting better and better, making my development life easier and easier (code reuse, clean but efficient syntax). The only thing I miss from compiled languages is performance. Due to my first tests of pypy (since 1.3 I tried to use pypy, but it was not very promising back then), I was in Fear, Uncertainty and Doubt about PyPy. But since pypy 2.x it has become a lot more compatible, and performance is becoming really promising. So I am thinking seriously these days about switching to PyPy in places without scientific-library requirements (my projects need a lot of scikit-learn). That's why I am looking at performance comparisons seriously. Sorry if the discussion I've started tainted pypy. On Sun, Sep 7, 2014 at 12:42 PM, Armin Rigo wrote: > Hi all, > > On 7 September 2014 03:01, Maciej Fijalkowski wrote: >> did you read my previous post how this is apples to oranges? You >> should setup some more barebone thing than tornado to "compare" > > Maybe we should mention that PyPy's JIT technology, when applied > straight to the Hippy VM, gives a PHP that is comparable in speed to > HHVM.
PyPy itself has seen Python-specific optimizations for a longer > time than Hippy for PHP, however. > > Anyway, our point is that questions like "when will PyPy be as fast as > HHVM" are loaded with the implicit assumption "on this benchmark set > X". We might as well answer "PyPy has been much faster than today's > HHVM since many years" since that's true on benchmark set Y. > Actually, there are such benchmark sets (which we trust more because > they try to compare apples to apples) showing PyPy to be in the same > ballpark performance as V8. But feel free to trust whatever benchmark > set you want, obviously. > > Personally I would ditch PHP in favour of a more reasonable language > any day of the week. Then I'd check that the final performance of my > rewritten app is still ok on . I bet it > would be. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From matti.picus at gmail.com Mon Sep 8 00:23:35 2014 From: matti.picus at gmail.com (Matti Picus) Date: Mon, 08 Sep 2014 01:23:35 +0300 Subject: [pypy-dev] pypy 2.4-beta1 (Snow White) is available for testing Message-ID: <540CDAE7.9020700@gmail.com> Get it while it's hot, let me know if something is wrong. https://bitbucket.org/pypy/pypy/downloads Comments and corrections on the release notice would be appreciated http://pypy.readthedocs.org/en/latest/release-2.4.0.html Please test the MacOS version since we have had reports in the past about buildbot compilation problems. Matti From drsalists at gmail.com Mon Sep 8 05:13:11 2014 From: drsalists at gmail.com (Dan Stromberg) Date: Sun, 7 Sep 2014 20:13:11 -0700 Subject: [pypy-dev] PyPy for analysis?
In-Reply-To: <1410116399.2642.6.camel@newpride> References: <540C448F.1050908@gmail.com> <540CA311.30801@gmail.com> <1410116399.2642.6.camel@newpride> Message-ID: On Sun, Sep 7, 2014 at 11:59 AM, Yury V. Zaytsev wrote: > On Sun, 2014-09-07 at 20:25 +0200, Scott West wrote: >> >> Otherwise I can just grab the bytecode or AST and do it myself, but I >> didn't want to replicate any work if I didn't have to :) > > I'm not exactly sure what kind of analysis you have in mind, but you > might wish to have a look at pysonar, if you haven't seen it already: > > https://github.com/yinwang0/pysonar2 > > Maybe it could be a good starting point, although it's in Java... Still depending on what your goals are, you might also look at pylint, pyflakes and pychecker. From arigo at tunes.org Mon Sep 8 10:03:14 2014 From: arigo at tunes.org (Armin Rigo) Date: Mon, 8 Sep 2014 10:03:14 +0200 Subject: [pypy-dev] pypy 2.4-beta1 (Snow White) is available for testing In-Reply-To: <540CDAE7.9020700@gmail.com> References: <540CDAE7.9020700@gmail.com> Message-ID: Hi Matti, On 8 September 2014 00:23, Matti Picus wrote: > Get it while it's hot, let me know if something is wrong. Great :-) > https://bitbucket.org/pypy/pypy/downloads No 32-bit Linux? About 32-bit Linux, we should remember to update the main download.html when we do the release: I finally upgraded the machine from Ubuntu 10.04 to 12.04 (which is also compatible with 14.04). A bientôt, Armin From 1989lzhh at gmail.com Wed Sep 10 15:18:27 2014 From: 1989lzhh at gmail.com (1989lzhh) Date: Wed, 10 Sep 2014 21:18:27 +0800 Subject: [pypy-dev] PyPy for analysis? In-Reply-To: <1410116399.2642.6.camel@newpride> References: <540C448F.1050908@gmail.com> <540CA311.30801@gmail.com> <1410116399.2642.6.camel@newpride> Message-ID: <008FBB91-24DF-4FCD-803D-00A8604BF47F@gmail.com> I tried to find a solution before and ended up writing it myself. You may check my GitHub project 'cyjit' > On Sep 8, 2014, at 2:59, "Yury V. Zaytsev" wrote:
> >> On Sun, 2014-09-07 at 20:25 +0200, Scott West wrote: >> >> Otherwise I can just grab the bytecode or AST and do it myself, but I >> didn't want to replicate any work if I didn't have to :) > > I'm not exactly sure what kind of analysis you have in mind, but you > might wish to have a look at pysonar, if you haven't seen it already: > > https://github.com/yinwang0/pysonar2 > > Maybe it could be a good starting point, although it's in Java... > > -- > Sincerely yours, > Yury V. Zaytsev > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From zauddelig at gmail.com Thu Sep 11 10:20:03 2014 From: zauddelig at gmail.com (Fabrizio Messina) Date: Thu, 11 Sep 2014 10:20:03 +0200 Subject: [pypy-dev] I would like to join PyPy development Message-ID: Hello, I would like to join pypy development. I would like to know if there are some minor issues I can help with to get me started. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Sep 12 18:52:21 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 12 Sep 2014 10:52:21 -0600 Subject: [pypy-dev] I would like to join PyPy development In-Reply-To: References: Message-ID: Hi Fabrizio. The easiest way is to come to IRC (which I think you did) and ask around. You can also browse the open issues at https://bitbucket.org/pypy/pypy/issues?status=new&status=open and read the "how to contribute" document http://pypy.readthedocs.org/en/latest/how-to-contribute.html Cheers, fijal On Thu, Sep 11, 2014 at 2:20 AM, Fabrizio Messina wrote: > Hello, I would like to join pypy development, I would like to know if there > are some minor issues I can help with to get me started. > Regards.
> > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From pjenvey at underboss.org Fri Sep 12 19:48:36 2014 From: pjenvey at underboss.org (Philip Jenvey) Date: Fri, 12 Sep 2014 10:48:36 -0700 Subject: [pypy-dev] I would like to join PyPy development In-Reply-To: References: Message-ID: <8D51249F-C313-41C9-8006-B364A096D907@underboss.org> Right now there's a bunch of low hanging fruit on the py3.3 branch, where we're upgrading PyPy3's stdlib compatibility to 3.3. You can see the failing stdlib tests here (labeled as app-level tests): http://buildbot.pypy.org/summary?branch=py3.3 A couple that are pretty easy to get started with: test_mmap or test_zlib, among others. But I definitely recommend coming by the IRC channel. -- Philip Jenvey On Sep 12, 2014, at 9:52 AM, Maciej Fijalkowski wrote: > Hi Fabrizio. > > The easiest way is to come to IRC (which I think you did) and ask > around. Yo ucan also browse open issues at > https://bitbucket.org/pypy/pypy/issues?status=new&status=open and read > "how to contribute" document > http://pypy.readthedocs.org/en/latest/how-to-contribute.html > > Cheers, > fijal > > On Thu, Sep 11, 2014 at 2:20 AM, Fabrizio Messina wrote: >> Hello, I would like to join pypy development, I would like to know if there >> are some minor issues I can help with to get me started. >> Regards. From partner at leawo.info Tue Sep 16 09:20:55 2014 From: partner at leawo.info (Toby) Date: Tue, 16 Sep 2014 15:20:55 +0800 Subject: [pypy-dev] Donation and Sponsorship Inquiry Message-ID: <2014091615204462586719@leawo.info> Hello pypy developers, This is Toby from Leawo Software (www.leawo.org), where I work as a BD Director. I am reaching out to ask: are you interested in a software donation, hosting a giveaway for your users, or accepting sponsorship in any form? We can offer basically any software you see on our website mentioned above.
Thank you, and I look forward to your early reply so we can share more info and discuss the details. Have a great day! Best regards, Toby from Leawo BD Department Skype: Tobyyang84 Office Line: +86 755-26553081-8013 From ram at rachum.com Thu Sep 18 16:59:40 2014 From: ram at rachum.com (Ram Rachum) Date: Thu, 18 Sep 2014 17:59:40 +0300 Subject: [pypy-dev] Pypy3 supporting Python 3.3 Message-ID: Hi everybody, I've been waiting for a PyPy3 release that supports Python 3.3 for a while now. Today I went on the PyPy site to check whether one has been released. I was happy to see this: the Python 3.3 compatible release - PyPy3 2.3.1 I downloaded the new release, but the prompt I got was this: Python 3.2.5 (986752d005bb, Jun 19 2014, 21:38:38) [PyPy 2.3.1 with MSC v.1500 32 bit] on win32 Type "help", "copyright", "credits" or "license" for more information. >>>> It says "Python 3.2.5" rather than Python 3.3.x. I also tried running some code that uses `yield from` and it failed. So what's going on? Is there a release that supports Python 3.3 or not? I'd be happy to use a beta or even alpha release, as long as it's available in binary form. Thanks, Ram. From matti.picus at gmail.com Thu Sep 18 17:41:09 2014 From: matti.picus at gmail.com (Matti Picus) Date: Thu, 18 Sep 2014 18:41:09 +0300 Subject: [pypy-dev] Pypy3 supporting Python 3.3 In-Reply-To: References: Message-ID: <541AFD15.8060909@gmail.com> Sorry, the 3.3 is a typo; it should read 3.2.5. I am fixing it now, thanks for pointing that out. We are progressing with the 3.3 release, and help is appreciated. Matti On 18/09/2014 5:59 PM, Ram Rachum wrote: > Hi everybody, > > I've been waiting for a PyPy3 release that supports Python 3.3 for a > while now. Today I went on the PyPy site to check whether one has been > released.
> > I was happy to see this: > > the Python 3.3 compatible release - PyPy3 2.3.1 > > I downloaded the new release, but the prompt I got was this: > > Python 3.2.5 (986752d005bb, Jun 19 2014, 21:38:38) > [PyPy 2.3.1 with MSC v.1500 32 bit] on win32 > Type "help", "copyright", "credits" or "license" for more information. > >>>> > > It says "Python 3.2.5" rather than Python 3.3.x. I also tried running > some code that uses `yield from` and it failed. > > So what's going on? Is there a release that supports Python 3.3 or > not? I'd be happy to use a beta or even alpha release, as long as it's > available in binary form. > > > Thanks, > Ram. > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Fri Sep 19 07:11:21 2014 From: arigo at tunes.org (Armin Rigo) Date: Fri, 19 Sep 2014 07:11:21 +0200 Subject: [pypy-dev] Pypy3 supporting Python 3.3 In-Reply-To: <541AFD15.8060909@gmail.com> References: <541AFD15.8060909@gmail.com> Message-ID: Hi Ram, On 18 September 2014 17:41, Matti Picus wrote: > Sorry, the 3.3 is a typo, it should read 3.2.5 Some in-progress betas are available from: http://buildbot.pypy.org/nightly/py3.3/ Maybe someone working on it can trigger a further, more up-to-date build; most of them happen to be "nojit". A bientôt, Armin From ram at rachum.com Fri Sep 19 10:32:04 2014 From: ram at rachum.com (Ram Rachum) Date: Fri, 19 Sep 2014 11:32:04 +0300 Subject: [pypy-dev] Pypy3 supporting Python 3.3 In-Reply-To: References: <541AFD15.8060909@gmail.com> Message-ID: So no Windows one?
On Fri, Sep 19, 2014 at 8:11 AM, Armin Rigo wrote: > Hi Ram, > > On 18 September 2014 17:41, Matti Picus wrote: > > Sorry, the 3.3 is a typo, it should read 3.2.5 > > Some in-progress betas are available from: > > http://buildbot.pypy.org/nightly/py3.3/ > > Maybe someone working on it can trigger a further, more up-to-date > build; most of them happen to be "nojit". > > > A bientôt, > > Armin From matti.picus at gmail.com Fri Sep 19 12:20:59 2014 From: matti.picus at gmail.com (Matti Picus) Date: Fri, 19 Sep 2014 13:20:59 +0300 Subject: [pypy-dev] PyPy 2.4.0 final prerelease now available Message-ID: <541C038B.8010702@gmail.com> An HTML attachment was scrubbed... URL: From estama at gmail.com Mon Sep 22 19:37:00 2014 From: estama at gmail.com (Eleytherios Stamatogiannakis) Date: Mon, 22 Sep 2014 20:37:00 +0300 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: <541C038B.8010702@gmail.com> References: <541C038B.8010702@gmail.com> Message-ID: <54205E3C.9020408@gmail.com> Hello, First, the problem that I have. Right now, when I get a string back from a C function, I have to make two copies of it: ffi.cdef(""" const char *getString(...); """) tmp = ffi.string(clib.getString(...)) # 1st copy pystring = tmp.decode('utf-8') # 2nd copy So I thought: why not use an ffi.buffer on it and do the decoding directly on the buffer: cstr = ffi.new('char []', 'abcd') b = unicode(ffi.buffer(cstr), 'utf-8') The above works. But the problem is that in C, a function that returns an array cannot be declared, so I cannot do: b = unicode( ffi.buffer( clib.getString(...) ) ,'utf-8') because it'll only return the first character of getString, due to being declared as a 'char*'. Is there any way in CFFI to declare a function as returning a 'char[]' so that a buffer can be used directly on its result? Thank you. l.
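The pointer-plus-length idea that drives the rest of this thread can be sketched with the standard library's ctypes module (a simulation only: `create_string_buffer` stands in for memory owned by a C library, and the names are illustrative, not CFFI's API):

```python
import ctypes

# Stand-in for memory owned by a C library; in the thread this would be
# the "char *" returned by clib.getString(...).
raw = "h\u00e9llo, sqlite".encode("utf-8")
cbuf = ctypes.create_string_buffer(raw)      # NUL-terminated copy
ptr = ctypes.cast(cbuf, ctypes.c_char_p)

# Without a known length, the binding has to scan for the final NUL,
# which is essentially what ffi.string() does:
scanned = ctypes.string_at(ptr)              # strlen-style scan, then copy

# With an explicit length there is no scan: one copy plus the decode.
text = ctypes.string_at(ptr, len(raw)).decode("utf-8")
```

Either way the bytes are copied once out of C memory; the question debated in this thread is only whether finding the length costs an extra scan or an extra FFI call.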
From matti.picus at gmail.com Mon Sep 22 20:19:16 2014 From: matti.picus at gmail.com (Matti Picus) Date: Mon, 22 Sep 2014 21:19:16 +0300 Subject: [pypy-dev] PyPy 2.4.0 has been released Message-ID: <54206824.3050207@gmail.com> Thanks to bug reporters and fixers, we have come out of the beta cycle with a much better product, a brand new, shiny PyPy 2.4.0, available now. http://morepypy.blogspot.com/2014/09/pypy-240-released-9-days-left-in.html http://doc.pypy.org/en/latest/release-2.4.0.html Please consider helping out in the final days of our funding drive, only 9 days left to obtain a matching donation from the PSF. Matti From arigo at tunes.org Tue Sep 23 08:22:27 2014 From: arigo at tunes.org (Armin Rigo) Date: Tue, 23 Sep 2014 08:22:27 +0200 Subject: [pypy-dev] PyPy Warsaw Sprint (October 21-25th, 2014) Message-ID: Hi all, Here's the announcement (below) for the next PyPy sprint, in one month's time in Warsaw. It will take place just after the Polish PyCon Pl'14 conference, which is also in Poland, although not in Warsaw. See http://pl.pycon.org/2014/en/ in case you're interested. (There is of course no need to attend one in order to attend the other.) Armin ===================================================================== PyPy Warsaw Sprint (October 21-25th, 2014) ===================================================================== The next PyPy sprint will be in Warsaw, Poland for the first time. This is a fully public sprint. PyPy sprints are a very good way to get into PyPy development and no prior PyPy knowledge is necessary. ------------------------------ Goals and topics of the sprint ------------------------------ For newcomers: * Bring your application or library and we'll help you port it to PyPy (if needed), benchmark and profile. * The easiest way to start hacking on PyPy is to write support for some missing Python 3.3 functionality, or to work on numpy. 
We'll also work on more specific topics, depending on who is here and what their interest is, like some missing GC/JIT optimizations, software transactional memory, etc. ----------- Exact times ----------- The work days should be October 21st - 25th, 2014. There might be a day or an afternoon of break in the middle. We'll typically start at 10:00 in the morning. ------------ Location ------------ The sprint will happen within a room of Warsaw University. The address is Pasteura 5 (which is a form of "Pasteur street"), dept. of Physics, room 450. The person of contact is Maciej Fijalkowski. -------------- Registration -------------- If you want to attend, please register by adding yourself to the "people.txt" file in Mercurial:: https://bitbucket.org/pypy/extradoc/ https://bitbucket.org/pypy/extradoc/raw/extradoc/sprintinfo/warsaw-2014 or on the pypy-dev mailing list if you do not yet have check-in rights:: http://mail.python.org/mailman/listinfo/pypy-dev Remember that Poland is a regular Schengen zone EU country, with main-EU-zone power adapters. From arigo at tunes.org Tue Sep 23 08:38:29 2014 From: arigo at tunes.org (Armin Rigo) Date: Tue, 23 Sep 2014 08:38:29 +0200 Subject: [pypy-dev] PyPy 2.4.0 has been released In-Reply-To: <54206824.3050207@gmail.com> References: <54206824.3050207@gmail.com> Message-ID: Re-hi, On 22 September 2014 20:19, Matti Picus wrote: > Thanks to bug reporters and fixers, we have come out of the beta cycle with > a much better product, a brand new, shiny PyPy 2.4.0, available now. > http://morepypy.blogspot.com/2014/09/pypy-240-released-9-days-left-in.html Please note the file dates: if you downloaded one of the pypy-2.4.0-* files before the official release, a new version may have been re-uploaded in the meantime under the same name. In case of doubt, check that you got the same md5/sha1 hashes as http://pypy.org/download.html says. 
Here they are, too: 63bd68546f60cf5921ba7654f3fe47aa pypy-2.4.0-linux64.tar.bz2 6c9b444a1cd090ab7b43083a24e07734 pypy-2.4.0-linux-armel.tar.bz2 5ff951da5989a00e01611678c311f8af pypy-2.4.0-linux-armhf-raring.tar.bz2 d7540883a52f91433da62b0cdfaaa30f pypy-2.4.0-linux-armhf-raspbian.tar.bz2 77a971f5198685ff60528de5def407dd pypy-2.4.0-linux.tar.bz2 07896c0ac37f82884e021c9a4514f479 pypy-2.4.0-osx64.tar.bz2 6a25a212e7c5121f1f3988c118d05695 pypy-2.4.0-src.tar.bz2 907d6fbabc5bcd5bafdcf02a76a8ca33 pypy-2.4.0-win32.zip c362247226f1cde2b957ab5e885f41475381553b pypy-2.4.0-linux64.tar.bz2 d542ee549ded9face573ac9fb49a23f5a5b4be60 pypy-2.4.0-linux-armel.tar.bz2 b8e02dc381e5040e2bf50541e82f0148f9a46a48 pypy-2.4.0-linux-armhf-raring.tar.bz2 ad65e7ddb1582b465a37090dc4a13bc37a8cd95d pypy-2.4.0-linux-armhf-raspbian.tar.bz2 fd52b42069287ca11e816c8e18fc95f53542c73d pypy-2.4.0-linux.tar.bz2 aa7f9b41d8bfda16239b629cd1b8dc884c2ad808 pypy-2.4.0-osx64.tar.bz2 e2e0bcf8457c0ae5a24f126a60aa921dabfe60fb pypy-2.4.0-src.tar.bz2 b72c3365c23c34ffd35a781fb72d8722e0b7517e pypy-2.4.0-win32.zip A bientôt, Armin. From arigo at tunes.org Tue Sep 23 08:52:14 2014 From: arigo at tunes.org (Armin Rigo) Date: Tue, 23 Sep 2014 08:52:14 +0200 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: <54205E3C.9020408@gmail.com> References: <541C038B.8010702@gmail.com> <54205E3C.9020408@gmail.com> Message-ID: Hi Lefteris, On 22 September 2014 19:37, Eleytherios Stamatogiannakis wrote: > b = unicode( ffi.buffer( clib.getString(...) ) ,'utf-8') > > because it'll only return the first character of getString, due to being > declared as a 'char*'. The issue is only that ffi.buffer() tries to guess how long a buffer you're giving it, and with "char *" the guess is one (only ffi.string() has logic to look for the final null character in the array). You need to get its length explicitly, for example like this: p = clib.getString(...)
# a "char *" length = clib.strlen(p) # the standard strlen() function from C b = unicode(ffi.buffer(p, length), 'utf-8') A bient?t, Armin. From dynamicgl at gmail.com Tue Sep 23 10:31:50 2014 From: dynamicgl at gmail.com (Gelin Yan) Date: Tue, 23 Sep 2014 16:31:50 +0800 Subject: [pypy-dev] What is the max heap size pypy can manage well? Message-ID: Hi All I am interested in using pypy for production now. Due to I need to handle a large number of concurrent connections (almost 500K). I want to know the max heap that pypy can manage. As I know V8 still have a limit on 1.4G heap size, I am not sure whether pypy has a similar limitation like that. Thanks. Regards gelin yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From kostia.lopuhin at gmail.com Tue Sep 23 10:58:46 2014 From: kostia.lopuhin at gmail.com (=?UTF-8?B?0JrQvtGB0YLRjyDQm9C+0L/Rg9GF0LjQvQ==?=) Date: Tue, 23 Sep 2014 12:58:46 +0400 Subject: [pypy-dev] What is the max heap size pypy can manage well? In-Reply-To: References: Message-ID: I think pypy developers will clarify about the limit (I don't think there is any), but we run 5-15 Gb pypy processes in production, although the use case is very different from yours. 2014-09-23 12:31 GMT+04:00 Gelin Yan : > Hi All > > I am interested in using pypy for production now. Due to I need to handle > a large number of concurrent connections (almost 500K). I want to know the > max heap that pypy can manage. > > As I know V8 still have a limit on 1.4G heap size, I am not sure whether > pypy has a similar limitation like that. Thanks. > > Regards > > gelin yan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From fijall at gmail.com Tue Sep 23 11:05:54 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 23 Sep 2014 11:05:54 +0200 Subject: [pypy-dev] What is the max heap size pypy can manage well? 
In-Reply-To: References: Message-ID: PyPy has no limitation on the heap size it can handle. On Tue, Sep 23, 2014 at 10:31 AM, Gelin Yan wrote: > Hi All > > I am interested in using pypy for production now. Due to I need to handle > a large number of concurrent connections (almost 500K). I want to know the > max heap that pypy can manage. > > As I know V8 still have a limit on 1.4G heap size, I am not sure whether > pypy has a similar limitation like that. Thanks. > > Regards > > gelin yan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From estama at gmail.com Tue Sep 23 14:54:22 2014 From: estama at gmail.com (Eleytherios Stamatogiannakis) Date: Tue, 23 Sep 2014 15:54:22 +0300 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: References: <541C038B.8010702@gmail.com> <54205E3C.9020408@gmail.com> Message-ID: <54216D7E.6090208@gmail.com> On 23/09/14 09:52, Armin Rigo wrote: > Hi Lefteris, > > On 22 September 2014 19:37, Eleytherios Stamatogiannakis > wrote: >> b = unicode( ffi.buffer( clib.getString(...) ) ,'utf-8') >> >> because it'll only return the first character of getString, due to being >> declared as a 'char*'. > > The issue is only that ffi.buffer() tries to guess how long a buffer > you're giving it, and with "char *" the guess is one (only > ffi.string() has logic to look for the final null character in the > array). If only ffi.string has logic to look for the final null character, then how can below work? >> teststr=ffi.new('char[]', 'asdfasdfasdfasdfasdfasdf') >> unicode(ffi.buffer(teststr), 'utf-8') u'asdfasdfasdfasdfasdfasdf\x00' Above doesn't explicitly set the length in ffi.buffer. There is still one problem with ffi.buffer and the last "\x00" in input, but otherwise it works with only 1 copy to go from a char* to a Python unicode string. 
The problem is that i cannot declare a C function as returning a char[] so that ffi.buffer will have the same behaviour on its results as it has with above "teststr". > You need to get its length explicitly, for example like this: > > p = clib.getString(...) # a "char *" > length = clib.strlen(p) # the standard strlen() function from C > b = unicode(ffi.buffer(p, length), 'utf-8') I've tried that, and the overhead of the second call is more or less equal to the cost of the copy when using ffi.string. Kind regards, l. From yyc1992 at gmail.com Tue Sep 23 15:42:43 2014 From: yyc1992 at gmail.com (Yichao Yu) Date: Tue, 23 Sep 2014 09:42:43 -0400 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: <54216D7E.6090208@gmail.com> References: <541C038B.8010702@gmail.com> <54205E3C.9020408@gmail.com> <54216D7E.6090208@gmail.com> Message-ID: On Tue, Sep 23, 2014 at 8:54 AM, Eleytherios Stamatogiannakis < estama at gmail.com> wrote: > On 23/09/14 09:52, Armin Rigo wrote: > >> Hi Lefteris, >> >> On 22 September 2014 19:37, Eleytherios Stamatogiannakis >> wrote: >> >>> b = unicode( ffi.buffer( clib.getString(...) ) ,'utf-8') >>> >>> because it'll only return the first character of getString, due to being >>> declared as a 'char*'. >>> >> >> The issue is only that ffi.buffer() tries to guess how long a buffer >> you're giving it, and with "char *" the guess is one (only >> ffi.string() has logic to look for the final null character in the >> array). >> > > If only ffi.string has logic to look for the final null character, then > how can below work? > > >> teststr=ffi.new('char[]', 'asdfasdfasdfasdfasdfasdf') > >> unicode(ffi.buffer(teststr), 'utf-8') > u'asdfasdfasdfasdfasdfasdf\x00' > > Above doesn't explicitly set the length in ffi.buffer. There is still one > problem with ffi.buffer and the last "\x00" in input, but otherwise it > works with only 1 copy to go from a char* to a Python unicode string. 
> The first line you have returns an object that owns the memory and therefore knows how long it is, which is later used by ffi.buffer to figure out how long the buffer is. Also notice that the result of the second line has '\x00' at the end. This also works even if the string has null bytes in the middle: In [9]: unicode(ffi.buffer(ffi.new('char[]', '\0a'))) Out[9]: u'\x00a\x00' > The problem is that i cannot declare a C function as returning a char[] so > that ffi.buffer will have the same behaviour on its results as it has with > above "teststr". > > You need to get its length explicitly, for example like this: >> >> p = clib.getString(...) # a "char *" >> length = clib.strlen(p) # the standard strlen() function from C >> b = unicode(ffi.buffer(p, length), 'utf-8') >> > I've tried that, and the overhead of the second call is more or less equal > to the cost of the copy when using ffi.string. > > Kind regards, > > l. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From dynamicgl at gmail.com Tue Sep 23 18:03:36 2014 From: dynamicgl at gmail.com (Gelin Yan) Date: Wed, 24 Sep 2014 00:03:36 +0800 Subject: [pypy-dev] What is the max heap size pypy can manage well? In-Reply-To: References: Message-ID: On Tue, Sep 23, 2014 at 5:05 PM, Maciej Fijalkowski wrote: > PyPy has no limitation on the heap size it can handle. > > On Tue, Sep 23, 2014 at 10:31 AM, Gelin Yan wrote: > > Hi All > > > > I am interested in using pypy for production now. Due to I need to > handle > > a large number of concurrent connections (almost 500K). I want to know > the > > max heap that pypy can manage. > > > > As I know V8 still have a limit on 1.4G heap size, I am not sure > whether > > pypy has a similar limitation like that. Thanks.
> > > > Regards > > > > gelin yan > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > > > Hi Fij Thanks for your clarification. I am going to try pypy 2.4 after a few days. Regards gelin yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ram at rachum.com Wed Sep 24 13:56:10 2014 From: ram at rachum.com (Ram Rachum) Date: Wed, 24 Sep 2014 14:56:10 +0300 Subject: [pypy-dev] Pypy3 supporting Python 3.3 In-Reply-To: <541BEABC.8070302@gmail.com> References: <541AFD15.8060909@gmail.com> <541BEABC.8070302@gmail.com> Message-ID: Is this still broken? (I'm waiting for Windows binaries of Py3.3 development builds.) On Fri, Sep 19, 2014 at 11:35 AM, Matti Picus wrote: > Something is broken, I just tried to debug and wrote about it on IRC > https://botbot.me/freenode/pypy/ > It seems to be in the ffi mechanism, but I don't have more time today to > play > > > > On 19/09/2014 11:32 AM, Ram Rachum wrote: > > So no Windows one? > > On Fri, Sep 19, 2014 at 8:11 AM, Armin Rigo wrote: > >> Hi Ram, >> >> On 18 September 2014 17:41, Matti Picus wrote: >> > Sorry, the 3.3 is a typo, it should read 3.2.5 >> >> Some in-progress betas are available from: >> >> http://buildbot.pypy.org/nightly/py3.3/ >> >> Maybe someone working on it can trigger a further, more up-to-date >> build; most of them happen to be "nojit". >> >> >> A bient?t, >> >> Armin >> > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matti.picus at gmail.com Wed Sep 24 14:29:21 2014 From: matti.picus at gmail.com (Matti Picus) Date: Wed, 24 Sep 2014 15:29:21 +0300 Subject: [pypy-dev] Pypy3 supporting Python 3.3 In-Reply-To: References: <541AFD15.8060909@gmail.com> <541BEABC.8070302@gmail.com> Message-ID: <5422B921.2080701@gmail.com> Someone needs to fix this error (on py3.3 branch) NotImplementedError: On windows, os.replace() should overwrite the destination from\rlib\rposix.py", line 178, in replace On 24/09/2014 2:56 PM, Ram Rachum wrote: > Is this still broken? (I'm waiting for Windows binaries of Py3.3 > development builds.) > > On Fri, Sep 19, 2014 at 11:35 AM, Matti Picus > wrote: > > Something is broken, I just tried to debug and wrote about it on IRC > https://botbot.me/freenode/pypy/ > It seems to be in the ffi mechanism, but I don't have more time > today to play > > > > On 19/09/2014 11:32 AM, Ram Rachum wrote: >> So no Windows one? >> >> On Fri, Sep 19, 2014 at 8:11 AM, Armin Rigo > > wrote: >> >> Hi Ram, >> >> On 18 September 2014 17:41, Matti Picus >> > wrote: >> > Sorry, the 3.3 is a typo, it should read 3.2.5 >> >> Some in-progress betas are available from: >> >> http://buildbot.pypy.org/nightly/py3.3/ >> >> Maybe someone working on it can trigger a further, more >> up-to-date >> build; most of them happen to be "nojit". >> >> >> A bient?t, >> >> Armin >> >> > > From arigo at tunes.org Wed Sep 24 19:13:00 2014 From: arigo at tunes.org (Armin Rigo) Date: Wed, 24 Sep 2014 19:13:00 +0200 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: <54216D7E.6090208@gmail.com> References: <541C038B.8010702@gmail.com> <54205E3C.9020408@gmail.com> <54216D7E.6090208@gmail.com> Message-ID: Hi, On 23 September 2014 14:54, Eleytherios Stamatogiannakis wrote: >> p = clib.getString(...) 
# a "char *" >> length = clib.strlen(p) # the standard strlen() function from C >> b = unicode(ffi.buffer(p, length), 'utf-8') > > I've tried that, and the overhead of the second call is more or less equal > to the cost of the copy when using ffi.string. You cannot have a C function returning a 'char[]'. That's why you need to declare it returning a 'char *', and then you don't know the length. Sorry, it's the way C works; there is nothing I can do about that :-) Occasionally, we see C functions with this kind of signature: size_t getString(xxx, char **result); This would return the length, and use 'result' as an output parameter, to store into '*result' a pointer to the string. If you really care about performance, then you might want to change the C library you're binding to in order to do that. A bient?t, Armin. From estama at gmail.com Thu Sep 25 09:06:18 2014 From: estama at gmail.com (Elefterios Stamatogiannakis) Date: Thu, 25 Sep 2014 10:06:18 +0300 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: References: <541C038B.8010702@gmail.com> <54205E3C.9020408@gmail.com> <54216D7E.6090208@gmail.com> Message-ID: <5423BEEA.8060608@gmail.com> On 24/09/14 20:13, Armin Rigo wrote: > Hi, > > On 23 September 2014 14:54, Eleytherios Stamatogiannakis > wrote: >>> p = clib.getString(...) # a "char *" >>> length = clib.strlen(p) # the standard strlen() function from C >>> b = unicode(ffi.buffer(p, length), 'utf-8') >> >> I've tried that, and the overhead of the second call is more or less equal >> to the cost of the copy when using ffi.string. > > You cannot have a C function returning a 'char[]'. That's why you > need to declare it returning a 'char *', and then you don't know the > length. Sorry, it's the way C works; there is nothing I can do about > that :-) Thank you for clarifying. I thought that ffi.buffer scanned for the \0 to find the end of the string for "char[]" types. 
> Occasionally, we see C functions with this kind of signature: > > size_t getString(xxx, char **result); > > This would return the length, and use 'result' as an output parameter, > to store into '*result' a pointer to the string. If you really care > about performance, then you might want to change the C library you're > binding to in order to do that. Unfortunately, the C library that i use (libsqlite3) does not provide a function like that :( . It has a function that returns the size of the string, but in my tests the overhead of doing another CFFI call (to find the size) is greater than doing the 2nd copy (depending on the average string size). We are doing 100s of millions of string passing calls back and forth from the libsqlite3 library, so any way to improve the efficiency of this case would be more than welcome :) . Best regards, l. From arigo at tunes.org Thu Sep 25 14:10:38 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 25 Sep 2014 14:10:38 +0200 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: <5423BEEA.8060608@gmail.com> References: <541C038B.8010702@gmail.com> <54205E3C.9020408@gmail.com> <54216D7E.6090208@gmail.com> <5423BEEA.8060608@gmail.com> Message-ID: Hi, On 25 September 2014 09:06, Elefterios Stamatogiannakis wrote: > Unfortunately, the C library that i use (libsqlite3) does not provide a > function like that :( . It has a function that returns the size of the > string, but in my tests the overhead of doing another CFFI call (to find the > size) is greater than doing the 2nd copy (depending on the average string > size). In general, if performance is an issue, particularly if you're running CPython (as opposed to PyPy), you can try to write small helpers in C that regroup a few operations. This can reduce the overhead of doing two calls instead of one. 
In this case, you can write this in the ffi.verify() part: size_t myGetString(xxx, char **presult) { *presult = getString(xxx); return strlen(*presult); } and then in Python you'd declare the function 'myGetString', and use it like that: p = ffi.new("char *[1]") # you can put this before some loop ... size = lib.myGetString(xxx, p) ..ffi.buffer(p[0], size).. A bient?t, Armin. From estama at gmail.com Thu Sep 25 16:57:54 2014 From: estama at gmail.com (Eleytherios Stamatogiannakis) Date: Thu, 25 Sep 2014 17:57:54 +0300 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: References: <541C038B.8010702@gmail.com> <54205E3C.9020408@gmail.com> <54216D7E.6090208@gmail.com> <5423BEEA.8060608@gmail.com> Message-ID: <54242D72.2090507@gmail.com> On 25/09/14 15:10, Armin Rigo wrote: > Hi, > > On 25 September 2014 09:06, Elefterios Stamatogiannakis > wrote: >> Unfortunately, the C library that i use (libsqlite3) does not provide a >> function like that :( . It has a function that returns the size of the >> string, but in my tests the overhead of doing another CFFI call (to find the >> size) is greater than doing the 2nd copy (depending on the average string >> size). > > In general, if performance is an issue, particularly if you're running > CPython (as opposed to PyPy), you can try to write small helpers in C > that regroup a few operations. This can reduce the overhead of doing > two calls instead of one. In this case, you can write this in the > ffi.verify() part: These tests i'm writting about use PyPy only. In CPython i use a native C wrapper (APSW). I try to not use ffi.verify because i want the program to be easily deployable. Also i want to test the maximum performance of CFFI's API. 
> size_t myGetString(xxx, char **presult) > { > *presult = getString(xxx); > return strlen(*presult); > } > > and then in Python you'd declare the function 'myGetString', and use > it like that: > > p = ffi.new("char *[1]") # you can put this before some loop > ... > size = lib.myGetString(xxx, p) > ..ffi.buffer(p[0], size).. > Wouldn't an "strbuffer" that does this scan (opportunistically) be faster for cases like above? Thank you very much for your suggestions. l. From eugeniocanom at gmail.com Thu Sep 25 18:01:38 2014 From: eugeniocanom at gmail.com (Eugenio Cano-Manuel Mendoza) Date: Thu, 25 Sep 2014 17:01:38 +0100 Subject: [pypy-dev] Problem with PyPy3 and lxml Message-ID: Hello, I'm currently trying to install lxml inside a virtual environment using PyPy3 (pypy3 2.3.1). During the compilation part, lxml fails to build. I asked on #pypy and they pointed me to lxml-cffi but it fails to build as well (error message here http://pastebin.com/LUw3FrDK). It works without a problem using PyPy 2.2.*, but it doesn't have Python 3 support. Cheers, Eugenio -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbaldridge at gmail.com Fri Sep 26 18:22:44 2014 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Fri, 26 Sep 2014 10:22:44 -0600 Subject: [pypy-dev] What is ABORT_ESCAPE? Message-ID: I have a JIT I've been working on for a few days now, and initial results were awesome, the JIT log showed just a few assembly ops to execute each iteration of a simple "count to 10000" loop. However, then I changed something and the traces stopped being generated. I hooked up the JIT hooks and noticed that about every 1000 iterations I'd get "ABORT_ESCAPE". After a bit more printing I get this: https://gist.githubusercontent.com/halgari/3cd3cd10f359f2103b89/raw/d8f335f72af5cf13c0b47b26e6d1e8b5c91b02ab/gistfile1.txt Now if I disable virtualizables, everything works fine. What should I be looking for to troubleshoot this? 
Thanks, Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Fri Sep 26 18:27:28 2014 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 26 Sep 2014 12:27:28 -0400 Subject: [pypy-dev] What is ABORT_ESCAPE? In-Reply-To: References: Message-ID: <1411748848.1468997.172144029.0DFEDE51@webmail.messagingengine.com> On Fri, Sep 26, 2014, at 12:22, Timothy Baldridge wrote: > I have a JIT I've been working on for a few days now, and initial results > were awesome, the JIT log showed just a few assembly ops to execute each > iteration of a simple "count to 10000" loop. However, then I changed > something and the traces stopped being generated. > > I hooked up the JIT hooks and noticed that about every 1000 iterations > I'd > get "ABORT_ESCAPE". After a bit more printing I get this: > https://gist.githubusercontent.com/halgari/3cd3cd10f359f2103b89/raw/d8f335f72af5cf13c0b47b26e6d1e8b5c91b02ab/gistfile1.txt > > Now if I disable virtualizables, everything works fine. What should I be > looking for to troubleshoot this? It means a virtualizable was used a function that JIT did not trace through (a "residual call"). Hopefully, you can either allow the JIT to trace through that call or get the function to not use the virtualizable. From fijall at gmail.com Fri Sep 26 18:28:36 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 26 Sep 2014 18:28:36 +0200 Subject: [pypy-dev] What is ABORT_ESCAPE? In-Reply-To: References: Message-ID: ABORT_ESCAPE means that the virtualizable was accessed from outside the JIT during tracing. This is a big no-no, virtualizables are meant to be used *only* from the JIT and not from the outside. If you look at the trace up to that point, it should be relatively obvious what's happening (e.g. something is a call and not inlined). The trace is an argument to the JIT hook. 
On Fri, Sep 26, 2014 at 6:22 PM, Timothy Baldridge wrote: > I have a JIT I've been working on for a few days now, and initial results > were awesome, the JIT log showed just a few assembly ops to execute each > iteration of a simple "count to 10000" loop. However, then I changed > something and the traces stopped being generated. > > I hooked up the JIT hooks and noticed that about every 1000 iterations I'd > get "ABORT_ESCAPE". After a bit more printing I get this: > https://gist.githubusercontent.com/halgari/3cd3cd10f359f2103b89/raw/d8f335f72af5cf13c0b47b26e6d1e8b5c91b02ab/gistfile1.txt > > Now if I disable virtualizables, everything works fine. What should I be > looking for to troubleshoot this? > > Thanks, > > Tim > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From arigo at tunes.org Fri Sep 26 18:51:28 2014 From: arigo at tunes.org (Armin Rigo) Date: Fri, 26 Sep 2014 18:51:28 +0200 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: <54242D72.2090507@gmail.com> References: <541C038B.8010702@gmail.com> <54205E3C.9020408@gmail.com> <54216D7E.6090208@gmail.com> <5423BEEA.8060608@gmail.com> <54242D72.2090507@gmail.com> Message-ID: Hi, On 25 September 2014 16:57, Eleytherios Stamatogiannakis wrote: > Wouldn't an "strbuffer" that does this scan (opportunistically) be faster > for cases like above? No, it can't be faster than my last solution. There is no way we're going to add custom logic for a special case into the general ffi library. If you don't want to use ffi.verify(), then you're stuck with two calls instead of one. On PyPy, try the latest version (2.4.0); it reduces the overhead of each call, so the cost of doing two calls instead of one is much lower. A bient?t, Armin. 
From arigo at tunes.org Fri Sep 26 18:53:33 2014 From: arigo at tunes.org (Armin Rigo) Date: Fri, 26 Sep 2014 18:53:33 +0200 Subject: [pypy-dev] Problem with PyPy3 and lxml In-Reply-To: References: Message-ID: Hi, On 25 September 2014 18:01, Eugenio Cano-Manuel Mendoza wrote: > (error message here http://pastebin.com/LUw3FrDK) A bug of PyPy3. Please report it to our bug tracker. I don't know if it's already known or already fixed, though. A bientôt, Armin. From vincent.legoll at gmail.com Fri Sep 26 19:26:28 2014 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Fri, 26 Sep 2014 19:26:28 +0200 Subject: [pypy-dev] Declaring a function that returns a string in CFFI In-Reply-To: References: <541C038B.8010702@gmail.com> <54205E3C.9020408@gmail.com> <54216D7E.6090208@gmail.com> <5423BEEA.8060608@gmail.com> <54242D72.2090507@gmail.com> Message-ID: Hello, maybe the code above / inside getstring already knows the string length, and you could exploit that fact to avoid the strlen calculation... On Fri, Sep 26, 2014 at 6:51 PM, Armin Rigo wrote: > Hi, > > On 25 September 2014 16:57, Eleytherios Stamatogiannakis > wrote: > > Wouldn't an "strbuffer" that does this scan (opportunistically) be faster > > for cases like above? > > No, it can't be faster than my last solution. There is no way we're > going to add custom logic for a special case into the general ffi > library. If you don't want to use ffi.verify(), then you're stuck > with two calls instead of one. On PyPy, try the latest version > (2.4.0); it reduces the overhead of each call, so the cost of doing > two calls instead of one is much lower. > > > A bientôt, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -- Vincent Legoll -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tbaldridge at gmail.com Fri Sep 26 19:55:39 2014 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Fri, 26 Sep 2014 11:55:39 -0600 Subject: [pypy-dev] What is ABORT_ESCAPE? In-Reply-To: References: Message-ID: So I've tried several things, but I'm still unable to figure out where the trace ends. It seems that the green values handed to the hook are for the start of the trace, which doesn't really help me with finding out where the trace aborts. "operations" seems to be a list of trace operations. I tried printing .getarglist() off the last item in that list, but I'm not exactly sure what that data contains. What I'm trying to get to is something that says "at a call to function X you tried to pass in a frame, now we have to force the frame". Any thoughts? Tim On Fri, Sep 26, 2014 at 10:28 AM, Maciej Fijalkowski wrote: > ABORT_ESCAPE means that the virtualizable was accessed from outside > the JIT during tracing. This is a big no-no, virtualizables are meant > to be used *only* from the JIT and not from the outside. If you look > at the trace up to that point, it should be relatively obvious what's > happening (e.g. something is a call and not inlined). The trace is an > argument to the JIT hook. > > On Fri, Sep 26, 2014 at 6:22 PM, Timothy Baldridge > wrote: > > I have a JIT I've been working on for a few days now, and initial results > > were awesome, the JIT log showed just a few assembly ops to execute each > > iteration of a simple "count to 10000" loop. However, then I changed > > something and the traces stopped being generated. > > > > I hooked up the JIT hooks and noticed that about every 1000 iterations > I'd > > get "ABORT_ESCAPE". After a bit more printing I get this: > > > https://gist.githubusercontent.com/halgari/3cd3cd10f359f2103b89/raw/d8f335f72af5cf13c0b47b26e6d1e8b5c91b02ab/gistfile1.txt > > > > Now if I disable virtualizables, everything works fine. What should I be > > looking for to troubleshoot this? 
> > > > Thanks, > > Tim > > > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > > > -- "One of the main causes of the fall of the Roman Empire was that, lacking zero, they had no way to indicate successful termination of their C programs." (Robert Firth) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Sep 26 20:10:51 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 26 Sep 2014 20:10:51 +0200 Subject: [pypy-dev] What is ABORT_ESCAPE? In-Reply-To: References: Message-ID: "operations" will include a call to something that you don't want to be a call, essentially On Fri, Sep 26, 2014 at 7:55 PM, Timothy Baldridge wrote: > So I've tried several things, but I'm still unable to figure out where the > trace ends. It seems that the green values handed to the hook are for the > start of the trace, which doesn't really help me with finding out where the > trace aborts. > > "operations" seems to be a list of trace operations. I tried printing > .getarglist() off the last item in that list, but I'm not exactly sure what > that data contains. > > What I'm trying to get to is something that says "at a call to function X > you tried to pass in a frame, now we have to force the frame". > > Any thoughts? > > Tim > > On Fri, Sep 26, 2014 at 10:28 AM, Maciej Fijalkowski > wrote: >> >> ABORT_ESCAPE means that the virtualizable was accessed from outside >> the JIT during tracing. This is a big no-no, virtualizables are meant >> to be used *only* from the JIT and not from the outside. If you look >> at the trace up to that point, it should be relatively obvious what's >> happening (e.g. something is a call and not inlined). The trace is an >> argument to the JIT hook.
>> >> On Fri, Sep 26, 2014 at 6:22 PM, Timothy Baldridge >> wrote: >> > I have a JIT I've been working on for a few days now, and initial >> > results >> > were awesome, the JIT log showed just a few assembly ops to execute each >> > iteration of a simple "count to 10000" loop. However, then I changed >> > something and the traces stopped being generated. >> > >> > I hooked up the JIT hooks and noticed that about every 1000 iterations >> > I'd >> > get "ABORT_ESCAPE". After a bit more printing I get this: >> > >> > https://gist.githubusercontent.com/halgari/3cd3cd10f359f2103b89/raw/d8f335f72af5cf13c0b47b26e6d1e8b5c91b02ab/gistfile1.txt >> > >> > Now if I disable virtualizables, everything works fine. What should I be >> > looking for to troubleshoot this? >> > >> > Thanks, >> > >> > Tim >> > >> > >> > _______________________________________________ >> > pypy-dev mailing list >> > pypy-dev at python.org >> > https://mail.python.org/mailman/listinfo/pypy-dev >> > > > > > > -- > "One of the main causes of the fall of the Roman Empire was that, lacking > zero, they had no way to indicate successful termination of their C > programs." > (Robert Firth) From fijall at gmail.com Fri Sep 26 20:11:16 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 26 Sep 2014 20:11:16 +0200 Subject: [pypy-dev] What is ABORT_ESCAPE? In-Reply-To: References: Message-ID: we can maybe provide some better support. Can you run the JIT stuff untranslated? then it should be easy. Also you can (harder) poke in gdb On Fri, Sep 26, 2014 at 8:10 PM, Maciej Fijalkowski wrote: > "operations" will include a call to something that you don't want to > be a call, essentially > > On Fri, Sep 26, 2014 at 7:55 PM, Timothy Baldridge wrote: >> So I've tried several things, but I'm still unable to figure out where the >> trace ends. It seems that the green values handed to the hook are for the >> start of the trace, which doesn't really help me with finding out where the >> trace aborts.
>> >> "operations" seems to be a list of trace operations. I tried printing >> .getarglist() off the last item in that list, but I'm not exactly sure what >> that data contains. >> >> What I'm trying to get to is something that says "at a call to function X >> you tried to pass in a frame, now we have to force the frame". >> >> Any thoughts? >> >> Tim >> >> On Fri, Sep 26, 2014 at 10:28 AM, Maciej Fijalkowski >> wrote: >>> >>> ABORT_ESCAPE means that the virtualizable was accessed from outside >>> the JIT during tracing. This is a big no-no, virtualizables are meant >>> to be used *only* from the JIT and not from the outside. If you look >>> at the trace up to that point, it should be relatively obvious what's >>> happening (e.g. something is a call and not inlined). The trace is an >>> argument to the JIT hook. >>> >>> On Fri, Sep 26, 2014 at 6:22 PM, Timothy Baldridge >>> wrote: >>> > I have a JIT I've been working on for a few days now, and initial >>> > results >>> > were awesome, the JIT log showed just a few assembly ops to execute each >>> > iteration of a simple "count to 10000" loop. However, then I changed >>> > something and the traces stopped being generated. >>> > >>> > I hooked up the JIT hooks and noticed that about every 1000 iterations >>> > I'd >>> > get "ABORT_ESCAPE". After a bit more printing I get this: >>> > >>> > https://gist.githubusercontent.com/halgari/3cd3cd10f359f2103b89/raw/d8f335f72af5cf13c0b47b26e6d1e8b5c91b02ab/gistfile1.txt >>> > >>> > Now if I disable virtualizables, everything works fine. What should I be >>> > looking for to troubleshoot this? 
>>> > >>> > Thanks, >>> > >>> > Tim >>> > >>> > >>> > _______________________________________________ >>> > pypy-dev mailing list >>> > pypy-dev at python.org >>> > https://mail.python.org/mailman/listinfo/pypy-dev >>> > >> >> >> >> -- >> "One of the main causes of the fall of the Roman Empire was that, lacking >> zero, they had no way to indicate successful termination of their C >> programs." >> (Robert Firth) From pjenvey at underboss.org Sat Sep 27 19:22:09 2014 From: pjenvey at underboss.org (Philip Jenvey) Date: Sat, 27 Sep 2014 10:22:09 -0700 Subject: [pypy-dev] Problem with PyPy3 and lxml In-Reply-To: References: Message-ID: <10B2DA4D-5C1D-487F-B5E2-BEAD547BE1AA@underboss.org> On Sep 25, 2014, at 9:01 AM, Eugenio Cano-Manuel Mendoza wrote: > Hello, > > I'm currently trying to install lxml inside a virtual environment using PyPy3 (pypy3 2.3.1). During the compilation part, lxml fails to build. I asked on #pypy and they pointed me to lxml-cffi but it fails to build as well (error message here http://pastebin.com/LUw3FrDK). It works without a problem using PyPy 2.2.*, but it doesn't have Python 3 support. Hey there, The crash you're seeing has been fixed in 5f38597ef8a9 and will be in the PyPy3 2.4 release coming shortly. With the fix, lxml-cffi unfortunately still won't install, however. It appears to have at least some Py 2 only code that needs porting to 3. -- Philip Jenvey From kostia.lopuhin at gmail.com Sat Sep 27 21:09:24 2014 From: kostia.lopuhin at gmail.com (=?UTF-8?B?0JrQvtGB0YLRjyDQm9C+0L/Rg9GF0LjQvQ==?=) Date: Sat, 27 Sep 2014 23:09:24 +0400 Subject: [pypy-dev] PyPy Warsaw Sprint (October 21-25th, 2014) In-Reply-To: References: Message-ID: Hi! My name is Kostia Lopuhin, and I would like to come to the sprint. If all goes well, I will be at the sprint 21-24th and a part of 25th.
I would like to work on JIT optimizations, I am particularly interested in improving short loops, but understanding that this is a complex topic, I just want to start with something :) Also maybe I can work on using CPython modules from PyPy, using this http://morepypy.blogspot.ru/2011/12/plotting-using-matplotlib-from-pypy.html embedding trick by Maciej Fijalkowski, I extended it a little here https://bitbucket.org/kostialopuhin/embed-cpython 2014-09-23 10:22 GMT+04:00 Armin Rigo : > Hi all, > > Here's the announcement (below) for the next PyPy sprint, in one > month's time in Warsaw. It will take place just after the Polish > PyCon Pl'14 conference, which is also in Poland, although not in > Warsaw. See http://pl.pycon.org/2014/en/ in case you're interested. > (There is of course no need to attend one in order to attend the > other.) > > Armin > > > ===================================================================== > PyPy Warsaw Sprint (October 21-25th, 2014) > ===================================================================== > > The next PyPy sprint will be in Warsaw, Poland for the first > time. This is a fully public sprint. PyPy sprints are a very good way > to get into PyPy development and no prior PyPy knowledge is necessary. > > > ------------------------------ > Goals and topics of the sprint > ------------------------------ > > For newcomers: > > * Bring your application or library and we'll help you port it to PyPy > (if needed), benchmark and profile. > > * The easiest way to start hacking on PyPy is to write support for > some missing Python 3.3 functionality, or to work on numpy. > > We'll also work on more specific topics, depending on who is here > and what their interest is, like some missing GC/JIT optimizations, > software transactional memory, etc. > > > ----------- > Exact times > ----------- > > The work days should be October 21st - 25th, 2014. There might be > a day or an afternoon of break in the middle.
We'll typically start > at 10:00 in the morning. > > > ------------ > Location > ------------ > > The sprint will happen within a room of Warsaw University. The > address is Pasteura 5 (which is a form of "Pasteur street"), dept. of > Physics, room 450. The person of contact is Maciej Fijalkowski. > > > -------------- > Registration > -------------- > > If you want to attend, please register by adding yourself to the > "people.txt" file in Mercurial:: > > https://bitbucket.org/pypy/extradoc/ > https://bitbucket.org/pypy/extradoc/raw/extradoc/sprintinfo/warsaw-2014 > > or on the pypy-dev mailing list if you do not yet have check-in rights:: > > http://mail.python.org/mailman/listinfo/pypy-dev > > Remember that Poland is a regular Schengen zone EU country, with > main-EU-zone power adapters. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From tbaldridge at gmail.com Mon Sep 29 01:28:36 2014 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Sun, 28 Sep 2014 17:28:36 -0600 Subject: [pypy-dev] JITing non looping interpreted functions Message-ID: Let's say I have a bit of interpreter-level code that does something as simple as reduce: acc = None for x in range(10000): acc = interpret(func, wrap(x)) return acc The problem I seem to be hitting is that since there isn't a loop inside the code that interpret is running (the func variable), I don't seem to be getting an efficient trace. In essence I want the trace to start at the call to interpret, and inline enough of the above loop to end the trace at the next call to interpret. What's the best way to go about doing something like that? Thanks, Tim -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alex.gaynor at gmail.com Mon Sep 29 01:31:08 2014 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 28 Sep 2014 19:31:08 -0400 Subject: [pypy-dev] JITing non looping interpreted functions In-Reply-To: References: Message-ID: So, one solution is to simply write this loop in the interpreted language (this is what I did for Topaz, methods such as Array#each are just some ruby code). An alternative is to make a JitDriver for that function, you can see this pattern in pypy/objspace/std/setobject.py Alex On Sun, Sep 28, 2014 at 7:28 PM, Timothy Baldridge wrote: > Let's say I have a bit of interpreter-level code that does something > as simple as reduce: > > acc = None > for x in range(10000): > acc = interpret(func, wrap(x)) > > return acc > > > The problem I seem to be hitting is that since there isn't a loop inside > the code that interpret is running (the func variable), I don't seem to be > getting an efficient trace. In essence I want the trace to start at the > call to interpret, and inline enough of the above loop to end the trace at > the next call to interpret. > > What's the best way to go about doing something like that? > > Thanks, > > Tim > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed...
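Alex's JitDriver suggestion can be sketched roughly as follows. The JitDriver stub, the reduce_loop name, and the exact green/red split are illustrative guesses, not code from the thread; in real RPython the driver comes from rpython.rlib.jit, and the green key would typically be the function's bytecode object rather than the function itself:

```python
# Minimal stand-in for rpython.rlib.jit.JitDriver so this sketch runs
# under plain Python; when untranslated, the real hooks are also no-ops.
class JitDriver(object):
    def __init__(self, greens, reds):
        self.greens = greens
        self.reds = reds

    def jit_merge_point(self, **live_vars):
        pass  # marks the loop header the JIT should compile

# `func` is green (constant for a given compiled loop), so the JIT
# produces one specialized loop per distinct interpreted function.
driver = JitDriver(greens=['func'], reds=['i', 'acc'])

def reduce_loop(func, n):
    acc = 0
    i = 0
    while i < n:
        driver.jit_merge_point(func=func, i=i, acc=acc)
        acc = func(acc, i)  # stands in for interpret(func, wrap(i))
        i += 1
    return acc

print(reduce_loop(lambda a, x: a + x, 10))  # -> 45
```

Each distinct green value keys its own compiled loop, so even an interpreted function with no internal loop gets a hot loop for the trace to attach to.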
URL: From arigo at tunes.org Mon Sep 29 12:20:55 2014 From: arigo at tunes.org (Armin Rigo) Date: Mon, 29 Sep 2014 12:20:55 +0200 Subject: [pypy-dev] JITing non looping interpreted functions In-Reply-To: References: Message-ID: Hi Alex, On 29 September 2014 01:31, Alex Gaynor wrote: > So, one solution is to simply write this loop in the interpreted language > (this is what I did for Topaz, methods such as Array#each are just some ruby > code). An alternative is to make a JitDriver for that function, you can see > this pattern in pypy/objspace/std/setobject.py That's a strange example. Maybe fijal can explain why a jit_driver with no arguments at all is still useful. >> acc = None >> for x in range(10000): >> acc = interpret(func, wrap(x)) For this use case, I'd go with a jit_driver with the "func" as a green argument (or the function's bytecode, if there is one). Then you get one loop compiled for every "func", which is what you want here. A bientôt, Armin. From arigo at tunes.org Tue Sep 30 10:23:45 2014 From: arigo at tunes.org (Armin Rigo) Date: Tue, 30 Sep 2014 10:23:45 +0200 Subject: [pypy-dev] PyPy Warsaw Sprint (October 21-25th, 2014) In-Reply-To: References: Message-ID: Hi Kostia, On 27 September 2014 21:09, Kostia Lopuhin wrote: > My name is Kostia Lopuhin, and I would like to come to the sprint. Welcome :-) > I would like to work on JIT optimizations, I am particularly > interested in improving short loops, but understanding that this is a > complex topic, I just want to start with something :) > Also maybe I can work on using CPython modules from PyPy, using > this http://morepypy.blogspot.ru/2011/12/plotting-using-matplotlib-from-pypy.html > embedding trick by Maciej Fijalkowski, I extended it a little here > https://bitbucket.org/kostialopuhin/embed-cpython Both are possible to do, yes. Arguably, the most important improvement in the JIT is to bring down the warm-up time, but that is not a big issue for short loops.
They would however benefit from the second-best improvement: better assembler generation. A bientôt, Armin. From lac at openend.se Tue Sep 30 14:52:47 2014 From: lac at openend.se (Laura Creighton) Date: Tue, 30 Sep 2014 14:52:47 +0200 Subject: [pypy-dev] Patreon .. more ways to get micropayments Message-ID: <201409301252.s8UCql3E007257@fido.openend.se> Two friends of mine are using this to support mobile game development. After 2 weeks of using it, they have hit the 100 USD a month milestone (so the TB proboards forum is about to be ad-free). Read about the service here. http://www.patreon.com/faq I thought it might be something we are interested in. You can see how they are using it here: http://www.patreon.com/TreseBrothers Laura